[jira] Commented: (CONNECTORS-56) All features should be accessible through an API

2010-08-26 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CONNECTORS-56?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12902940#action_12902940
 ] 

Mark Miller commented on CONNECTORS-56:
---

bq. HTTP methods other than GET or PUT are in fact poorly supported in many 
HTTP clients, including Apache Commons HTTPClient.

That's untrue.
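
For the record, Apache Commons HttpClient 3.x exposes DELETE (and PUT) as 
first-class method classes. A minimal sketch, assuming a hypothetical 
http://localhost:8080/lcf/jobs/1 endpoint:

{code}
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.DeleteMethod;

// Commons HttpClient 3.x handles DELETE like any other method.
// The URL below is a made-up example, not a real LCF endpoint.
public class DeleteExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        DeleteMethod delete = new DeleteMethod("http://localhost:8080/lcf/jobs/1");
        try {
            int status = client.executeMethod(delete);
            System.out.println("DELETE returned " + status);
        } finally {
            delete.releaseConnection();
        }
    }
}
{code}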

bq.  I am also unsure of whether Jetty supports the DELETE method at the 
servlet level.

Jetty has no issues with DELETE, POST, PUT, or GET. Nor does Tomcat or any 
other container I have seen.
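
At the servlet level, DELETE is just another override on HttpServlet and the 
container dispatches it like any other verb. A bare-bones sketch (the servlet 
name and mapping are invented for illustration):

{code}
import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet, mapped to e.g. /jobs/* in web.xml.
public class JobServlet extends HttpServlet {
    @Override
    protected void doDelete(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // getPathInfo() carries whatever follows the servlet mapping (e.g. a job id).
        resp.setStatus(HttpServletResponse.SC_OK);
        resp.getWriter().println("deleted " + req.getPathInfo());
    }
}
{code}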

bq. I therefore think your suggestion would potentially cause a great deal of 
headache for no tangible benefit.

Again, I don't agree - it would cause fewer headaches, as REST is something of 
a standard rather than an ad hoc API. There are many advantages to having a 
consistent RESTful API.
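
Just to illustrate what a consistent verb/resource mapping buys you - this is 
JAX-RS notation purely as an example, not a claim about how the LCF API would 
be implemented, and the job resource is hypothetical:

{code}
import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

// One resource path, one behavior per standard verb - no ad hoc command names.
@Path("/jobs/{id}")
public class JobResource {
    @GET
    public String read(@PathParam("id") String id) {
        return "definition of job " + id;   // fetch the job definition
    }

    @PUT
    public String update(@PathParam("id") String id, String body) {
        return "updated job " + id;         // create or replace the job
    }

    @DELETE
    public void remove(@PathParam("id") String id) {
        // remove the job
    }
}
{code}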

 All features should be accessible through an API
 

 Key: CONNECTORS-56
 URL: https://issues.apache.org/jira/browse/CONNECTORS-56
 Project: Apache Connectors Framework
  Issue Type: Sub-task
  Components: Framework core
Reporter: Jack Krupansky
Assignee: Karl Wright

 LCF consists of a full-featured crawling engine and a full-featured user 
 interface to access the features of that engine, but some applications are 
 better served with a full API that lets the application control the crawling 
 engine, including creation and editing of connections and creation, editing, 
 and control of jobs. Put simply, everything that a user can accomplish via 
 the LCF UI should be doable through an LCF API. All LCF objects should be 
 queryable through the API.
 A primary use case is Solr applications that currently use Aperture for 
 crawling but would prefer the full-featured capabilities of LCF as a 
 crawling engine.
 I do not wish to over-specify the API in this initial description, but I 
 think the LCF API should probably be a traditional REST API, with some of 
 the API elements specified via the context path, some parameters via URL 
 query parameters, and complex, detailed structures as JSON (or similar); see 
 the sketch after this description. The precise details of the API are beyond 
 the scope of this initial description and will be added incrementally once 
 the high-level approach to the API becomes reasonably settled.
 A job status and event reporting scheme is also needed in conjunction with 
 the LCF API. That requirement has already been captured as CONNECTORS-41.
 The intention for the API is to create, edit, access, and control all of the 
 objects managed by LCF. The main focus is on repositories, jobs, and status, 
 and less about document-specific crawling information, but there may be some 
 benefit to querying crawling status for individual documents as well.
 Nothing in this proposal should in any way limit or constrain the features 
 that will be available in the LCF UI. The intent is that LCF should continue 
 to have a full-featured UI, but also a full-featured API.
 Note: This issue is part of Phase 2 of the CONNECTORS-50 umbrella issue.
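
Below is a sketch of what a call in the style described above might look like 
from a Java client. The context path (/lcf/api), the jobs resource, the query 
parameter, and the JSON payload are all invented placeholders, since the real 
API is not yet specified.

{code}
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;
import org.apache.commons.httpclient.methods.PostMethod;
import org.apache.commons.httpclient.methods.StringRequestEntity;

public class ApiSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();

        // Create a job: the resource is named by the path, the details travel as JSON.
        PostMethod create = new PostMethod("http://localhost:8080/lcf/api/jobs");
        create.setRequestEntity(new StringRequestEntity(
                "{\"connection\":\"myrepo\",\"description\":\"nightly crawl\"}",
                "application/json", "UTF-8"));
        try {
            System.out.println("create returned " + client.executeMethod(create));
        } finally {
            create.releaseConnection();
        }

        // Query job status: simple selectors passed as URL query parameters.
        GetMethod status = new GetMethod(
                "http://localhost:8080/lcf/api/jobs?state=running");
        try {
            client.executeMethod(status);
            System.out.println(status.getResponseBodyAsString());
        } finally {
            status.releaseConnection();
        }
    }
}
{code}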

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CONNECTORS-56) All features should be accessible through an API

2010-07-14 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/CONNECTORS-56?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12888377#action_12888377
 ] 

Jack Krupansky commented on CONNECTORS-56:
--

Some cURL and/or Perl test scripts to illustrate use of the API would be 
helpful.

 All features should be accessible through an API
 

 Key: CONNECTORS-56
 URL: https://issues.apache.org/jira/browse/CONNECTORS-56
 Project: Lucene Connector Framework
  Issue Type: Sub-task
  Components: Framework core
Reporter: Jack Krupansky

