[ https://issues.apache.org/jira/browse/SOLR-6787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14242041#comment-14242041 ]
Noble Paul edited comment on SOLR-6787 at 12/11/14 2:50 AM:
------------------------------------------------------------

bq. but I don't understand why there needs to be special APIs.

These are not really "special" APIs. It is just another requesthandler which anyone can register in any core.

bq. We should be able to support binary field types, and if our support is lacking

Yes. But it also needs to do more things to make this usable. This requesthandler makes that very convenient. I don't see how a generic binary field API can be as useful. Suggestions are welcome.

bq. But can a request handler / component in solrconfig.xml make use of a jar in .system? Does this mean that .system will somehow need to come up before every other collection in a cloud setup?

All the handlers loaded from {{.system}} will automatically be {{startup="lazy"}}. So any request fired to those handlers must respond with ".system collection not yet available" until {{.system}} is loaded.

bq. Does this stuff relate at all to the goal of providing a smaller download and having an easier plugin mechanism for the stuff that's in contrib?

No, this is not conceived for a smaller download. I don't yet plan to make the contribs plug in through this.

The use case is this: I have a fairly large SolrCloud cluster where I deploy a custom component. The current solution is to go to all the nodes, put a jar file there, and do a rolling restart of the entire cluster. And for every new version of the component, the user has to go through the same steps. For Lucidworks, it is a fairly common use case and will make our product easier to manage.

The other use case is to manage other files like synonyms / stopwords (or any other files required by any other component) in Solr, so that we don't load very large files into ZooKeeper.

bq. In some ways it feels like we're starting from the bottom up (which can be a fine approach) without the use-cases / high level designs / goals

We are rethinking the way Solr is being used. The objective is to make it less painful to do what we experts can do with Solr. I'm glad that people are asking. No, you haven't missed anything. This JIRA is the first piece of documentation ever to happen on this topic, and all questions are welcome. Let's build it together.
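The deploy use case above rests on an append-only, versioned blob store: each POST under an existing name adds a new version, and nothing is ever overwritten. As a rough model only (the class and method names here are illustrative, not Solr APIs), those semantics can be sketched as:

```python
class BlobStore:
    """Minimal model of the proposed .system blob store semantics:
    blobs are versioned by name; uploads append, never overwrite."""

    def __init__(self):
        self._blobs = {}  # name -> list of payloads; index + 1 = version number

    def post(self, name, payload):
        """Store a new version under `name`; return the 1-based version number."""
        versions = self._blobs.setdefault(name, [])
        versions.append(payload)
        return len(versions)

    def get(self, name, version=None):
        """Fetch a specific version, or the latest when version is None."""
        versions = self._blobs[name]
        if version is None:
            return versions[-1]          # latest version
        return versions[version - 1]

    def list_versions(self, name):
        """List all stored version numbers for `name` (old ones are kept)."""
        return list(range(1, len(self._blobs.get(name, [])) + 1))


store = BlobStore()
assert store.post("mycomponent", b"jar-v1") == 1
assert store.post("mycomponent", b"jar-v2") == 2   # new version; v1 is kept
assert store.get("mycomponent") == b"jar-v2"        # latest by default
assert store.get("mycomponent", 1) == b"jar-v1"     # old versions still readable
```

The point of the model is the contract, not the implementation: a cluster node can always fetch "latest" without coordinating a restart, while older versions stay retrievable until explicitly deleted.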
> API to manage blobs in Solr
> ---------------------------
>
>          Key: SOLR-6787
>          URL: https://issues.apache.org/jira/browse/SOLR-6787
>      Project: Solr
>   Issue Type: Sub-task
>     Reporter: Noble Paul
>     Assignee: Noble Paul
>      Fix For: 5.0, Trunk
>
>  Attachments: SOLR-6787.patch, SOLR-6787.patch
>
>
> A special collection called .system needs to be created by the user to store/manage blobs.
> The schema/solrconfig of that collection need to be automatically supplied by the system so that there are no errors.
> APIs need to be created to manage the content of that collection:
> {code}
> # create a new jar or add a new version of a jar
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary @mycomponent.jar http://localhost:8983/solr/.system/blob/mycomponent
> # GET on the end point would give a list of jars and other details
> curl http://localhost:8983/solr/.system/blob
> # GET on the end point with jar name would give details of the various versions of the available jars
> curl http://localhost:8983/solr/.system/blob/mycomponent
> # GET on the end point with jar name and version, with wt=filestream, to get the actual file
> curl http://localhost:8983/solr/.system/blob/mycomponent/1?wt=filestream > mycomponent.1.jar
> # GET on the end point with jar name and wt=filestream to get the latest version of the file
> curl http://localhost:8983/solr/.system/blob/mycomponent?wt=filestream > mycomponent.jar
> {code}
> Please note that the jars are never deleted. A new version is added to the system every time a new jar is posted for the name. You must use the standard delete commands to delete the old entries.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org