Query method if there are
> multiple ids found?
>
>
>
> For eg, I am using deleteByQuery("12");
>
>
>
> Can I specify a field name here, as in a search, and delete only those
> documents matching that field value?
>
>
>
> Like deleteByQuery("ID:12");
>
>
>
> Is this allowed?
>
>
>
> Thanks
>
> -Raghu
>
>
--
--Noble Paul
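For reference, Solr's delete-by-query does accept a field-qualified query like the one asked about above. A minimal sketch of the update message (the field name ID is taken from the question and assumed to exist in the schema):

```xml
<!-- POST to /update: deletes only documents whose ID field matches 12 -->
<delete>
  <query>ID:12</query>
</delete>
```

With SolrJ the equivalent is deleteByQuery("ID:12") followed by a commit.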
o that suggested that there MIGHT be another way to do this.
>
>
>
> Any info appreciated on doing this sort of distributed search.
>
>
>
> thx
>
>
--
--Noble Paul
the map after evaluating the value of
'x.id' as ${x.ID}
Then for subsequent requests it looks
>
> What are some dataset sizes that have been tested using this framework and
> what are some performance metrics?
>
> Thanks again
> Amit
>
> On Tue, Nov 25, 2008 at 7:3
ted in different circumstances, memory overhead or way that the data is
> brought from the database into Java it would be much appreciated for it's
> important for my application.
>
> Thanks in advance!
> Amit
>
--
--Noble Paul
---
>
> 2008-nov-25 12:25:25 org.apache.solr.handler.dataimport.SolrWriter upload
> VARNING: Error creating document :
> SolrInputDocument[{PUBID=PUBID(1.0)={43392}}]
>
> org.apache.solr.common.SolrException: ERROR:unknown field 'PUBID'
>at
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:274)
>
> ...
>
> ---
>
> Anyone who had similar problems or knows how to solve this!? Any help is
> truly appreciated!!
>
> // Joel
>
--
--Noble Paul
We considered these. The severity of errors is very much specific to
the source of data. It is very unlikely that a DB source throws up
errors. In XML data sources, say one or two URLs out of x are wrong:
would the user wish to ignore them, or abort the entire import?
So we decided to give more options, and the implementations are left to
the EntityProcessor. Moreover, the default is set to onError=abort
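As a sketch, the onError option described above goes on the entity in data-config.xml; the entity name and URL here are hypothetical:

```xml
<!-- skip documents from bad URLs instead of aborting the whole import;
     the other recognized values are "abort" (the default) and "continue" -->
<entity name="feeds"
        processor="XPathEntityProcessor"
        url="http://example.com/feed.xml"
        forEach="/docs/doc"
        onError="skip"/>
```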
>
>
>
> -Hoss
>
>
--
--Noble Paul
L tool pretending to be a RequestHandler. Originally it
was built to run outside of Solr using SolrJ. For better integration
and ease of use we changed it later.
SOLR-853 aims to achieve the original goal.
The goal of DIH is to become a full-featured ETL tool.
>
>
>
> -Hoss
>
>
--
--Noble Paul
.nabble.com/DataImportHanler-JDBC-case-problems-tp20617216p20617216.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
>
--
--Noble Paul
the-SOLR-Cores--tp20596015p20596015.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
>
--
--Noble Paul
AIL PROTECTED]> wrote:
> Schema:
>
>
> DIH:
>
>
> The column is uppercase ... isn't there some automagic happening now where
> DIH will introspect the fields @ load time?
>
> - Jon
>
> On Nov 19, 2008, at 11:11 PM, Noble Paul നോബിള് नोब्ळ् wrote:
>
uery and in turn has to insert multiple values to
> the same field during each invocation. It should do something similar
> to the RegexTransformer (field splitBy)... is that possible? Right now
> I have to use a workaround that includes the term-duplication on the
> database sides, which is kinda ugly if a term has to be duplicated a
> lot.
> Greetings,
> Steffen
>
--
--Noble Paul
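The splitBy behaviour referred to above can be sketched in data-config.xml like this (the column name, SQL query, and separator are hypothetical):

```xml
<!-- RegexTransformer turns a delimited string like "a,b,c" into
     three separate values of a multiValued field -->
<entity name="item" transformer="RegexTransformer"
        query="select id, terms from item">
  <field column="terms" splitBy=","/>
</entity>
```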
ERROR:unknown field 'DOCTYPE'
>>>
>>> Even though it is declared in schema.xml (lowercase), before I grep
>>> replace the entire file would that be my issue?
>>>
>>> Thanks.
>>>
>>> - Jon
>>
>
>
--
--Noble Paul
a look at the DateFormatTransformer. You can find documentation on
>> the
>> DataImportHandler wiki.
>>
>> http://wiki.apache.org/solr/DataImportHandler
>>
>> On Tue, Nov 18, 2008 at 10:41 PM, con <[EMAIL PROTECTED]> wrote:
>>
>>>
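The DateFormatTransformer mentioned above is configured per field; a sketch, with the column name and format pattern as assumptions:

```xml
<!-- parse the database string into a Solr date using a
     java.text.SimpleDateFormat pattern -->
<entity name="doc" transformer="DateFormatTransformer"
        query="select id, created from doc">
  <field column="created" dateTimeFormat="yyyy-MM-dd HH:mm:ss"/>
</entity>
```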
t; > >
>> > >
>> > > I tried also to configure all dataimport settings in solrconfig.xml,
>> but
>> > I don't know how to do this exactly. Among other things, I tried this
>> > format:
>> > >
>> > >
>> > > *** s
ation for the
>>> >>>> construction of a Lucene index from an arbitrary SQL query of a
>>> >>>> JDBC-accessible SQL database. It allows a user to control a number of
>>> >>>> parameters, including the SQL query to use, individual
>>> >>>> indexing/storage/term-vector nature of fields, analyzer, stop word
>>> >>>> list, and other tuning parameters. In its default mode it uses
>>> >>>> threading to take advantage of multiple cores.
>>> >>>>
>>> >>>> LuSql can handle complex queries, allows for additional per record
>>> >>>> sub-queries, and has a plug-in architecture for arbitrary Lucene
>>> >>>> document manipulation. Its only dependencies are three Apache Commons
>>> >>>> libraries, the Lucene core itself, and a JDBC driver.
>>> >>>>
>>> >>>> LuSql has been extensively tested, including a large 6+ million
>>> >>>> full-text & metadata journal article document collection, producing
>>> >>>> an
>>> >>>> 86GB Lucene index in ~13 hours.
>>> >>>>
>>> >>>> http://lab.cisti-icist.nrc-cnrc.gc.ca/cistilabswiki/index.php/LuSql
>>> >>>>
>>> >>>> Glen Newton
>>> >>>>
>>> >>>> --
>>> >>>>
>>> >>>> -
>>> >>>>
>>> >>>> -
>>> >>>> To unsubscribe, e-mail: [EMAIL PROTECTED]
>>> >>>> For additional commands, e-mail: [EMAIL PROTECTED]
>>> >>>>
>>> >>
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >>
>>> >> -
>>> >
>>> >
>>>
>>>
>>>
>>> --
>>>
>>> -
>>>
>>
>>
>>
>> --
>> Regards,
>> Shalin Shekhar Mangar.
>>
>
>
>
> --
>
> -
>
--
--Noble Paul
> So is it possible to set this new date format in some config file?
>
> Expecting suggestions/advice
> Thanks in advance
> con
> --
> View this message in context:
> http://www.nabble.com/Error-in-indexing-timestamp-format.-tp20556862p20556862.html
> Sent from the Sol
first role
>> of
>> security is to prevent accidents.
>>
>> I would suggest two layers of "read-only" switch. 1) Open the Lucene index
>> in read-only mode. 2) Allow only search servers to accept GET requests.
>>
>> Lance
>>
>>
>
--
--Noble Paul
es.ErrorReportValve.invoke(ErrorReportValve.ja
>> > va:102)
>> > at
>> > org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValv
>> > e.java:109)
>> > at
>> > org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java
>> > :286)
>> > at
>> > org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor
>> > .java:857)
>> > at
>> > org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.pro
>> > cess(Http11AprProtocol.java:565) at
>> > org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:150
>> > 9) at java.lang.Thread.run(Unknown Source) Caused by:
>> > org.apache.solr.handler.dataimport.DataImportHandlerException:
>> > Exception occurred while initializing context Processing Document # at
>> > org.apache.solr.handler.dataimport.DataImporter.loadDataConfig(DataImp
>> > orter.java:176)
>> > at
>> > org.apache.solr.handler.dataimport.DataImporter.(DataImporter.ja
>> > va:93)
>> > at
>> > org.apache.solr.handler.dataimport.DataImportHandler.inform(DataImport
>> > Handler.java:106) ... 17 more Caused by:
>> > org.xml.sax.SAXParseException: The value of attribute "regex"
>> > associated with an element type "field" must not contain the '<'
>> > character. at
>> > com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(Unknown
>> > Source) at
>> > com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(Unkn
>> > own
>> > Source) at
>> > org.apache.solr.handler.dataimport.DataImporter.loadDataConfig(DataImp
>> > orter.java:166) ... 19 more ) that prevented it from fulfilling this
>> > request.*
>> >
>> > I appreciate your help.
>> >
>> > Regards,
>> > ahmd
>> >
>> >
>>
>
--
--Noble Paul
n solrconfig.xml)
> failed:
>
>
> ...
> INFO: Reusing parent classloader
> Nov 17, 2008 2:18:14 PM org.apache.solr.common.SolrException log
> SEVERE: Error in solrconfig.xml:org.apache.solr.common.SolrException: No
> system property or default value specified for xmlFile.fileAbsolutePath
>at
> org.apache.solr.common.util.DOMUtil.substituteProperty(DOMUtil.java:311)
>at
> org.apache.solr.common.util.DOMUtil.substituteProperties(DOMUtil.java:264)
> ...
>
>
>
> Thanks again for your excellent support!
>
> Gisto
>
> --
> Der GMX SmartSurfer hilft bis zu 70% Ihrer Onlinekosten zu sparen!
> Ideal für Modem und ISDN: http://www.gmx.net/de/go/smartsurfer
>
--
--Noble Paul
meDir,
>> configFile );
>> mySolrCore = myCoreContainer.getCore("core_de");
>>RefCounted temp_search =
>> mySolrCore.getSearcher();
>> SolrIndexSearcher searcher = temp_search.get();
>
> No one ever worked directly with CoreContainer and SolrIndexSearcher ?
>
> Greets -Ralf-
>
--
--Noble Paul
.xml-tp20483691p20483691.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
>
--
--Noble Paul
>
> Hope my explanation is clearer now...
> Thanks in advance!
>
>
> --
> View this message in context:
> http://www.nabble.com/using-deduplication-with-dataimporthandler-tp20536053p20536053.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
>
--
--Noble Paul
ryIndexReader.java:188)
> at
> org.apache.lucene.index.DirectoryIndexReader.reopen(DirectoryIndexReader.java:124)
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1016)
> ... 14 more
> Nov 14, 2008 5:38:52 AM org.apache.solr.update.DirectUpdateHandler2 commit
>
> Any ideas, anyone?
>
> -- Bill
--
--Noble Paul
to do with -- protecting people from field name typos
> and returning errors instead of silently ignoring unexpected input is
> fairly important behavior -- especially for new users.
>
Actually it is done by DIH. When the dataconfig is loaded, DIH reports
this information on the console. Though it is limited, it helps to a
certain extent.
> -Hoss
>
>
--
--Noble Paul
with results.
> --
> View this message in context:
> http://www.nabble.com/DataImportHandler%2C-custom-properties-tp20482190p2049
> 8600.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
>
>
--
--Noble Paul
nIncrementGap="100">
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> positionIncrementGap="100" >
>
>
> igno
ot
> updated...
>
> Any suggestion? Have done many tests but no way...
>
> --
> View this message in context:
> http://www.nabble.com/troubles-with-delta-import-tp20498449p20498449.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
>
--
--Noble Paul
.3 release had a bug with handling dynamic fields (SOLR-742)
>
> I smell a bug somewhere, I just don't know enough about the code to know
> where the smell is coming from.
>
> -Hoss
>
>
--
--Noble Paul
few
> minutes to commit the indexed data.
>
>
> On Tue, Nov 11, 2008 at 11:35 PM, Noble Paul നോബിള് नोब्ळ् <
> [EMAIL PROTECTED]> wrote:
>
>> why is the id field multivalued? is there a uniqueKey in the schema ?
>> Are you sure there are no duplicates?
>&
The JdbcDataSource can run any query, even updates and deletes.
On Thu, Nov 13, 2008 at 9:27 AM, Noble Paul നോബിള് नोब्ळ्
<[EMAIL PROTECTED]> wrote:
> DIH can delete rows from the index. Look at the 'deletedPkQuery' option.
> http://wiki.apache.org/solr/
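A sketch of how deletedPkQuery fits into a delta-import entity; the table and column names are hypothetical:

```xml
<!-- on delta-import, ids returned by deletedPkQuery are removed from the
     index, while deltaQuery drives re-indexing of changed rows -->
<entity name="item" pk="id"
        query="select * from item"
        deltaQuery="select id from item
                    where last_modified &gt; '${dataimporter.last_index_time}'"
        deletedPkQuery="select id from deleted_item
                        where deleted_at &gt; '${dataimporter.last_index_time}'"/>
```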
ImportHandler.
>
> Is to do an implementation of the DataSource the best way to do this task?
> Is there a better way?
>
> Thanks for everything!!!
> --
> View this message in context:
> http://www.nabble.com/indexing-data-and-deleting-from-index-and-database-tp20466411p20466411.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
>
--
--Noble Paul
fine these fields as:
>
> multiValued="true"/>
> multiValued="true" required="false"/>
> multiValued="true" required="false"/>
>
>
> If I try to index just one field (id), then it indexes about 96 records,
> but if I try to index all the above three fields, it indexes only 615360
> records.
>
> Any help will be appreciated.
>
> thanks!
>
--
--Noble Paul
nsformer can use the same
> notation as in the query attribute (e.g. ${parententityname.fieldname} ) to
> access fields from parent entities, which allows them to merge data from
> multiple related rows, not just different columns.
>
> -Mauricio
>
>
>
> On Mon, Nov
advance
> Prabhu.K
>
> --
> View this message in context:
> http://www.nabble.com/How-to-create-Dynamic-core--tp20434246p20434246.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
>
--
--Noble Paul
t; −
>
> 1
> 125
> 0
> 2008-11-10 22:33:55
> −
>
> Indexing completed. Added/Updated: 125 documents. Deleted 0 documents.
>
> 2008-11-10 22:34:00
> 2008-11-10 22:34:00
> 0:0:5.79
>
> −
>
> This response format is experimental. It is likely to change in
al and used execution strings like:
> http://localhost:8983/solr/select/?indent=on&q=video&sort=price+desc
> etc however I'm working with sql data and not xml data.
>
> Thanks
>
> -Original Message-
> From: Noble Paul നോബിള് नोब्ळ् [mailto:[EMAIL PROTEC
>
>
>
>
>
> stored="true"/>
> stored="true"/>
> stored="true"/>
>
>
> Thanks.
>
--
--Noble Paul
NTONIO, TX, 78229
> Ope…
>
>
> Here is my solconfig.xml
> …
> class="org.apache.solr.handler.dataimport.DataImportHandler">
>
> data-config.xml
>
>
> …
> Data-config.xml is in the same dir as solconfig.xml
>
> My data-config.xml is like any other:
>
> url="jdbc:sqlserver://:1433;databaseName=x" user="x"
> password="x" />
>
>deltaQuery="select id from dbo.jobs where lastmodified >
> '${dataimporter.last_index_time}'">
>
>
>
>
>
> I'm using win xp with apache – and jetty + solr 1.3.0
>
> Thanks
>
>
>
--
--Noble Paul
08 4:57:37 PM
>>> > org.apache.solr.core.SolrCore execute INFO: [video] webapp=null
>>> path=null
>>> > params={q=static+newSearcher+warming+query+from+solrconfig.xml}
>>> hits=1149
>>> > status=0 QTime=0
>>> > Nov 6 16:57:37 solr-test jsvc.exec[24862]: Nov 6, 2008 4:57:37 PM
>>> > org.apache.solr.core.QuerySenderListener newSearcher INFO:
>>> > QuerySenderListener done.
>>> > Nov 6 16:57:37 solr-test jsvc.exec[24862]: Nov 6, 2008 4:57:37 PM
>>> > org.apache.solr.core.SolrCore registerSearcher INFO: [video] Registered
>>> > new searcher [EMAIL PROTECTED] main
>>> > Nov 6 16:57:37 solr-test jsvc.exec[24862]: Nov 6, 2008 4:57:37 PM
>>> > org.apache.solr.search.SolrIndexSearcher close INFO: Closing
>>> > [EMAIL PROTECTED] main
>>> >
>>> ^IfilterCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
>>> >
>>> ^IqueryResultCache{lookups=0,hits=0,hitratio=0.00,inserts=5,evictions=0,size=5,warmupTime=50,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
>>> >
>>> ^IdocumentCache{lookups=0,hits=0,hitratio=0.00,inserts=20,evictions=0,size=20,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
>>> > Nov 6 16:57:59 solr-test jsvc.exec[24862]: Nov 6, 2008 4:57:59 PM
>>> > org.apache.solr.core.SolrCore execute INFO: [video] webapp=/solr
>>> > path=/dataimport params={} status=0 QTime=0
>>> >
>>> >
>>>
>>> --
>>> View this message in context:
>>> http://www.nabble.com/solr-1.3---Problem-Full-Import-tp20364250p20421643.html
>>> Sent from the Solr - User mailing list archive at Nabble.com.
>>>
>>>
>>
>>
>> --
>> Regards,
>> Shalin Shekhar Mangar.
>>
>>
>
> --
> View this message in context:
> http://www.nabble.com/solr-1.3---Problem-Full-Import-tp20364250p20422919.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
>
--
--Noble Paul
> I figure if anyone knows, someone on this list knows. Thanks for any
>> info!
>>
>> --
>> Steve
>
> --
> Grant Ingersoll
>
> Lucene Helpful Hints:
> http://wiki.apache.org/lucene-java/BasicsOfPerformance
> http://wiki.apache.org/lucene-java/LuceneFAQ
>
>
>
>
>
>
>
>
>
>
--
--Noble Paul
hints to solve this issue. Waiting for your
> response,
> Thanking in advance
> con.
> --
> View this message in context:
> http://www.nabble.com/full-import-vs-delta-import-to-update-solr-index-tp20399673p20399673.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
>
--
--Noble Paul
em. Other cases: if Inner #2 is false, does Inner #3
> get run? Perhaps instead of true and false they could be
> ignore/break/continue where "ignoreerrors=break" means that an error in Inner
> #2 would prevent Inner #3.
>
> Lance
>
> -Original Message-
&g
nimize this risk, but is not liable for
>> any damage
>> you may sustain as a result of any virus in this e-mail. You should carry
>> out your
>> own virus checks before opening the e-mail or attachment. Infosys reserves
>> the
>> right to monitor and review the content of all messages sent to or from
>> this e-mail
>> address. Messages sent to or from this e-mail address may be stored on the
>> Infosys e-mail system.
>> ***INFOSYS End of Disclaimer INFOSYS***
>>
>
>
>
> --
> Regards,
> Shalin Shekhar Mangar.
>
--
--Noble Paul
sue an command to solr delete
> q=*:* commit and than start the indexing.
> For incremental operation we just take the data and the operation specified.
>
> Kindly let me know if there exist a smarter way to get this working.
>
> --Thanks and Regards
> Vaijanath
>
--
--Noble Paul
OK. You can raise an issue anyway.
On Fri, Nov 7, 2008 at 7:03 PM, Steven Anderson <[EMAIL PROTECTED]> wrote:
> Ideally, it would be a configuration option.
>
> Also, it would be great to have a hook to log or process an exception.
>
> Steve
>
>
> -Original M
>
>> nGeographicLocations : 39298 distinct values
>> nPersonNames : 325142 distinct values
>> nOrganizationNames : 130681 distinct values
>> nCategories : 929 distinct values
>> nSimpleConcepts : 110198 distinct values
>> nComplexConcepts : 1508141 distinct values
>> Each of those fields has a maximum of 7 values per document.
>>
>> Query :
>>
>> http://olympe:1006/solr/select
>> ?sort=publishdate+asc
>> &facet=true
>> &facet.field=nPersonNames
>> &facet.field=nOrganizationNames
>> &facet.field=nGeographicLocations
>> &facet.field=nCategories
>> &f.nPersonNames.facet.limit=10
>> &f.nOrganizationNames.facet.limit=10
>> &f.nGeographicLocations.facet.limit=10
>> &f.nCategories.facet.limit=10
>> &version=2.2
>> &wt=json
>> &json.nl=map
>> &q=nFullText%3A"Random words"+AND+searchable%3Atrue
>> &start=0
>> &rows=10
>>
>> Any help will be appreciated, because I don't know how to optimize the
>> query response time… :-(
>>
>> Regards,
>>
>> Cédric Houis
>> --
>> View this message in context:
>> http://www.nabble.com/Very-bad-performance-tp20366783p20366783.html
>> Sent from the Solr - User mailing list archive at Nabble.com.
>>
>>
>
--
--Noble Paul
f options, but we need to determine which one
> will perform best.
>
> Thanks,
>
> A. Steven Anderson
> 410-418-9908 VSTI
> 443-790-4269 cell
>
>
>
>
--
--Noble Paul
itor and review the content of all messages sent to or from this
> e-mail
> address. Messages sent to or from this e-mail address may be stored on the
> Infosys e-mail system.
> ***INFOSYS End of Disclaimer INFOSYS***
>
--
--Noble Paul
rJ cannot directly index XML. You may need to read docs from the XML
> before SolrJ can index it.
>
> Understood. We'd like to compare the performance difference of DIH vs.
> custom xml parsing + SolrJ.
Good. Share your observations.
>
> A. Steven Anderson
> 410-418-9908 VSTI
> 443-790-4269 cell
>
>
>
--
--Noble Paul
message in context:
> http://www.nabble.com/solr1.3---tomcat---does-it-create-the-doc-if-no-data-in-sub-entite--tp20360579p20360579.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
>
--
--Noble Paul
find a good data set this
> size?
>
> I saw the wikipedia dump reference in the DIH wiki, but that is only in
> the 7M+ doc range.
>
> Any suggestions would be greatly appreciated.
>
> Thanks,
>
> Steve
>
>
>
--
--Noble Paul
tHandler.java:106)
> ... 17 more Caused by: org.xml.sax.SAXParseException: The value of attribute
> "regex" associated with an element type "field" must not contain the '<'
> character. at
> com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(Unknown Source)
> at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(Unknown
> Source) at
> org.apache.solr.handler.dataimport.DataImporter.loadDataConfig(DataImporter.java:166)
> ... 19 more ) that prevented it from fulfilling this request.*
>
> I appreciate your help.
>
> Regards,
> ahmd
>
--
--Noble Paul
doc.setField("score", sdoc.score);
>}
>rb._responseDocs.set(sdoc.positionInResponse, doc);
> }
>
> Any idea?
>
> L.M.
>
>
> 2008/11/5 Noble Paul നോബിള് नोब्ळ् <[EMAIL PROTECTED]>
>>
>> the 'fl' parameter ca
meone please guide
> me to some documentation that can help me achieve this and write out my own
> start.jar file.
>
> Regards,
> Muhammed Sameer
>
>
>
>
--
--Noble Paul
lts (no
> results, actually). It all works well using only one shard, but it was very
> difficult to benchmark it.
>
> Any advice? I hope I'm missing something.
>
> Thanks,
>
> L.M.
>
--
--Noble Paul
an off-by-one bug of some sort in the DIH code.
>
> Thanks,
>
> Lance
>
> -Original Message-
> From: Noble Paul നോബിള് नोब्ळ् [mailto:[EMAIL PROTECTED]
> Sent: Saturday, November 01, 2008 7:44 PM
> To: solr-user@lucene.apache.org
> Subject: Re: DIH Http input
70)
> ^Iat
> org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1579)
> ^Iat
> org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.run(ContainerBase.java:1559)
> ^Iat java.lang.Thread.run(Thread.java:619)
>
> --
> View this message in context:
> http://www.nabble.com/MySql---Solr-1.3---Tomcat55-Full-Import-for-8%2C5M-of-data-%3E%3E-Exception-in-thread-%22Thread-33%22-tp20305986p20305986.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
>
--
--Noble Paul
t; have to use the patch that complicates the process I guess when its
>>> kind of tempting to try to use Solr as much vanilla as possible.
>>>
>>> What is the general method when using database information?
>>>
>>
>>
>>
>> --
>> Regards,
>> Shalin Shekhar Mangar.
>>
>>
>
> --
> View this message in context:
> http://www.nabble.com/XML-vs-mysql-import-with-DataImportHandler-tp17754471p20305117.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
>
--
--Noble Paul
example would be:
>
> />
>
> (The script would basically contain just a callback - getData(String query)
> that results in an array set or might set values on it's children, etc)
>
> - Jon
>
> On Nov 3, 2008, at 12:40 AM, Noble Paul നോബിള് नोब्ळ् wrote:
>
>
/update too (the
method is the same, getData()). The second entity can read from the DB and
create docs (see Jon Baer's suggestion) using the
XPathEntityProcessor as a sub-entity
--Noble
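The arrangement described above, with an XPathEntityProcessor sub-entity under a root entity, can be sketched as follows; the URL pattern, xpaths, and field names are hypothetical:

```xml
<!-- outer entity fetches ids from the DB; the inner entity pulls one
     XML document per id and extracts fields via xpath -->
<entity name="ids" query="select id from item" rootEntity="false">
  <entity name="detail"
          processor="XPathEntityProcessor"
          url="http://example.com/item/${ids.id}.xml"
          forEach="/item">
    <field column="title" xpath="/item/title"/>
  </entity>
</entity>
```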
On Mon, Nov 3, 2008 at 9:44 AM, Noble Paul നോബിള് नोब्ळ्
<[EMAIL PROTECTED]> wrote:
> Hi Lance,
> Do
Hi Lance,
Do a full import w/o debug and let us know if my suggestion worked
(rootEntity="false"). If it didn't, I can suggest something else
(writing a Transformer).
On Sun, Nov 2, 2008 at 8:13 AM, Noble Paul നോബിള് नोब्ळ्
<[EMAIL PROTECTED]> wrote:
> If you wish t
gt;>> ava:18
>>> 3)
>>> at
>>>
>>> org.apache.solr.handler.dataimport.XPathEntityProcessor.initQuery(XPat
>>> hEntit
>>> yProcessor.java:210)
>>> at
>>>
>>> org.apache.solr.handler.dataimport.XPathEntityProcessor.fetchNextRow(X
>>> PathEn
>>> tityProcessor.java:180)
>>> at
>>>
>>> org.apache.solr.handler.dataimport.XPathEntityProcessor.nextRow(XPathE
>>> ntityP
>>> rocessor.java:160)
>>> at
>>>
>>>
>> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.j
>> ava:
>>>
>>> 285)
>>> ...
>>> Oct 31, 2008 11:21:20 PM org.apache.solr.handler.dataimport.DocBuilder
>>> buildDocument
>>> SEVERE: Exception while processing: album document :
>>> SolrInputDocumnt[{name=name(1.0)={Groups of stuff}}]
>>> org.apache.solr.handler.dataimport.DataImportHandlerException:
>>> Exception in invoking url null Processing Document # 11
>>> at
>>>
>>> org.apache.solr.handler.dataimport.HttpDataSource.getData(HttpDataSour
>>> ce.jav
>>> a:115)
>>> at
>>>
>>> org.apache.solr.handler.dataimport.HttpDataSource.getData(HttpDataSour
>>> ce.jav
>>> a:47)
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>> --
>> Regards,
>> Shalin Shekhar Mangar.
>
>
--
--Noble Paul
essing: album document :
> SolrInputDocumnt[{name=name(1.0)={Groups of stuff}}]
> org.apache.solr.handler.dataimport.DataImportHandlerException: Exception in
> invoking url null Processing Document # 11
> at
> org.apache.solr.handler.dataimport.HttpDataSource.getData(HttpDataSource.jav
> a:115)
>at
> org.apache.solr.handler.dataimport.HttpDataSource.getData(HttpDataSource.jav
> a:47)
>
>
>
>
>
>
--
--Noble Paul
bset of the
> xpath syntax. Which streaming parser is it and where would I find this
> subset documented? I tried a few things like the "the first entry" and
> "length of data > 3" and nothing worked. "/a/b/c" was all that worked for
> me.
>
> Tx,
>
> Lance
>
--
--Noble Paul
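Regarding the xpath subset asked about above: the streaming reader behind XPathEntityProcessor accepts only simple absolute paths, not full XPath. Expressions along these lines are the kind the DIH wiki documents (element and attribute names here are examples):

```xml
<!-- supported forms: plain element paths, attribute values,
     and simple attribute-equality predicates -->
<field column="c"  xpath="/a/b/c"/>
<field column="id" xpath="/a/b/@id"/>
<field column="d"  xpath="/a/b[@type='x']/d"/>
```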
page which captures
> question on out of memory exception with both MySQL and MS SQL Server
> drivers.
>
> http://wiki.apache.org/solr/DataImportHandler#faq
>
> On Thu, Jun 26, 2008 at 9:36 AM, Noble Paul നോബിള് नोब्ळ्
> <[EMAIL PROTECTED]> wrote:
>> We must document
uite more detail about the problem you're trying to
> solve to get useful recommendations.
>
> Best
> Erick
>
> On Thu, Oct 30, 2008 at 8:50 AM, Raghunandan Rao <
> [EMAIL PROTECTED]> wrote:
>
>> Thanks Noble.
>>
>> So you mean to say that I need to create
t' throws
> away the first 100. 'delta-import' is not implemented. What is the special
> trick here? I'm using the Solr-1.3.0 release.
>
> Thanks,
>
> Lance Norskog
>
--
--Noble Paul
>
> -Original Message-
> From: Noble Paul നോബിള് नोब्ळ् [mailto:[EMAIL PROTECTED]
> Sent: Thursday, October 30, 2008 6:16 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Using Solrj
>
> Hi,
> There are two sides to this.
> 1. indexing (getting data into So
with 3 entities). How do I do that?
>
>
>
> Does anyone have any idea?
>
>
>
> Thanks
>
>
--
--Noble Paul
library doesn't make it "standard".
>
> : >> open a JIRA issue. we
> : > will use a gzip on both ends of the pipe . On
> : > the slave
> : >> side you can
> : > say
> : > true
> : > as an extra option to compress and
> : >> send
> : > data from server
> : > --Noble
>
>
> -Hoss
>
>
--
--Noble Paul
hing when compression is standard in HTTP? --wunder
>
> On 10/29/08 4:35 AM, "Noble Paul നോബിള് नोब्ळ्" <[EMAIL PROTECTED]>
> wrote:
>
>> open a JIRA issue. we will use a gzip on both ends of the pipe . On
> the slave
>> side you can say
> true
> a
solely those of the author and do not necessarily
> represent those of the company. (PAVD001)
> Shoe-shop.com Limited is a company registered in England and Wales with
> company number 03817232. Vat Registration GB 734 256 241. Registered Office
> Catherine House, Northminster Business P
but is a bit slower lookup.
>
> The doc of HashDocSet says "it can be a better choice if there are few
> docs in the set". What does 'few' mean in this context?
>
> Cheers !
>
> Jerome.
>
>
> --
> Jerome Eteve.
>
> Chat with me live at http://www.eteve.net
>
> [EMAIL PROTECTED]
>
--
--Noble Paul
fly
> without saving anything to disk. 'Rcopy' in particular has features to only
> copy what is not already at the target. The Putty suite 'pscp' program also
> has the compression feature.
>
> Lance
>
> -Original Message-
> From: Noble Paul നോബിള്
n get this down to
>> about 10% of the size.
>>
>>
>>
>> Is compression in the pipeline / can it be if not!
>>
>>
>>
>> simon
>>
>>
>>
>> This message has been scanned for malware by SurfControl plc.
>> www.surfcontrol.com
>>
>
>
>
> --
> --Noble Paul
>
--
--Noble Paul
op.com Limited is a company registered in England and Wales with
> company number 03817232. Vat Registration GB 734 256 241. Registered Office
> Catherine House, Northminster Business Park, Upper Poppleton, YORK, YO26
> 6QU.
>
> This message has been scanned for malware by SurfControl plc.
> www.surfcontrol.com
--
--Noble Paul
>
> Thanks,
> - Bill
>
> --
> From: "Noble Paul ??? ??" <[EMAIL PROTECTED]>
> Sent: Thursday, October 23, 2008 10:34 PM
> To:
> Subject: Re: Advice needed on master-slave configuration
>
>> It was committed on 10/21
>>
>> take the latest 1
e if not!
>
>
>
> simon
>
>
>
> This message has been scanned for malware by SurfControl plc.
> www.surfcontrol.com
>
--
--Noble Paul
erver.commit();
> System.out.println("Data commited");
> }
> /**
>* @param args
>* @throws IOException
> * @throws SolrServerException
>*/
> public static void main(String[] args) throws SolrServerException,
> IOException
> {
> Upload test = new Upload ();
> test.addData();
> }
>
> }
>
>
> Can someone help ?
>
> Thanks
>
>
--
--Noble Paul
t; termVectors="true" />
> multiValued="true" omitNorms="true" termVectors="true" />
>
> Hope that makes it a bit clearer. Thanks.
>
> Kind regards,
>
> Nick
> --
> View this message in context:
> http://www.nabble.com/How-to-search-a-DataImportHandler-solr-index-tp20120698p20149960.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
>
--
--Noble Paul
> 4.)
> And my last questions about Solr statistics/informations...
>
> ===> Is it possible to get informations (number of indexed documents, stored
> values from documents etc.) from the current lucene index?
> ===> The admin webinterface shows 'numDocs' and 'maxDoc' in
> 'statistics/core'. Is 'numDocs' = number of indexed documents? What does
> 'maxDocs' mean?
>
>
> Thanks a lot!
> gisto
> --
> GMX Kostenlose Spiele: Einfach online spielen und Spaß haben mit Pastry
> Passion!
> http://games.entertainment.gmx.net/de/entertainment/games/free/puzzle/6169196
>
--
--Noble Paul
tips are welcome.
> Thanks.
>
> Kind regards,
>
> Nick
> --
> View this message in context:
> http://www.nabble.com/How-to-search-a-DataImportHandler-solr-index-tp20120698p20145974.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
>
--
--Noble Paul
The feature depends on some other core changes which also need to be
included in the patch. Moreover, the logging has changed from Java
logging to SLF4J.
>
> Thanks,
> - Bill
>
> ------
> From: "Noble Paul ??? ??" <
in context:
> http://www.nabble.com/How-to-search-a-DataImportHandler-solr-index-tp20120698p20136424.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
>
--
--Noble Paul
ew this message in context:
>>> http://www.nabble.com/How-to-search-a-DataImportHandler-solr-index-tp20120698p20131412.html
>>> Sent from the Solr - User mailing list archive at Nabble.com.
>>>
>>
>>
>>
>
> --
> View this message in context:
> http://www.nabble.com/How-to-search-a-DataImportHandler-solr-index-tp20120698p20134681.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
>
--
--Noble Paul
need only this:
> public String getIndexDir() {
>return dataDir;
> }
>
>
> On Thu, Oct 23, 2008 at 4:54 PM, Noble Paul നോബിള് नोब्ळ् <
> [EMAIL PROTECTED]> wrote:
>
>> please go through this url once
>>
>> http://lucene.apache.org/solr/tutorial.html
&g
more
>
> Where do I get the nightly bits that will enable me to try this replication
> handler?
>
> Thanks,
> - Bill
>
> --
> From: "Noble Paul ??? ??" <[EMAIL PROTECTED]>
> Sent: Wednesday, October 22, 2
rer.com/
> http://www.sdbexplorer.com/
> http://www.chambal.com/
> http://www.minalyzer.com/
>
--
--Noble Paul
he
>>> distribution scripts.
>>>
>>> Every N hours when changes are committed and the index on U is updated, I
>>> want to copy the files from the master to the slave. Do I need to halt
>>> the solr server on Q while the index is being updated? If not, how do I
>>> copy the files into the data folder while the server is running? Any
>>> pointers would be greatly appreciated!
>>>
>>> Thanks!
>>>
>>> - Bill
>>
>>
>
--
--Noble Paul
a "test.xml" file and open it in a
> browser. If it isn't well-formed, you'll get an error.
>
> Here is my test XML:
>
>
>
> Here is what Firefox 3.0.3 says about that:
>
> XML Parsing Error: not well-formed
> Location: file:///Users/wunderwood/Desktop/test.xml
> Line Number 1, Column 18:
>
>
> -^
>
> wunder
>
>
--
--Noble Paul
http://wiki.apache.org/solr/DataImportHandler#head-eb523b0943596587f05532f3ebc506ea6d9a606b
On Wed, Oct 22, 2008 at 4:41 PM, sunnyfr <[EMAIL PROTECTED]> wrote:
>
> Can you tell me more about it ?
>
>
> Noble Paul നോബിള് नोब्ळ् wrote:
>>
>> you can try out a Trans
> Sent from the Solr - User mailing list archive at Nabble.com.
>
>
--
--Noble Paul
27;m french, what
> does that means ... is it when in one request you have several table and
> inner join between them ?
>
>
> Noble Paul നോബിള് नोब्ळ् wrote:
>>
>> do you have a nested entities? Then there is chance of firing too many
>> requests to
n(DataImporter.java:377)
>
> here is the full data-config:
>
>
> url="jdbc:postgresql://bm02:5432/bm" user="bm" />
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> what are we doing wrong?
> Florian
>
>
--
--Noble Paul
:285)
>> at
>> org.apache.solr.handler.dataimport.DocBuilder.doDelta(DocBuilder.java:211)
>> at
>> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:133)
>> at
>> org.apache.solr.handler.dataimport.DataImporter.doDeltaImport(DataImporter.java:359)
>> at
>> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:388)
>> at
>> org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:377)
>>
>> here is the full data-config:
>>
>>
>> > url="jdbc:postgresql://bm02:5432/bm" user="bm" />
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> what are we doing wrong?
>> Florian
>>
>>
>
>
> --
> Regards,
> Shalin Shekhar Mangar.
>
--
--Noble Paul
>
>
> --
> View this message in context:
> http://www.nabble.com/MySql---Solr-1.3---Full-import%2C-how-to-make-request-pack-smaller--tp20066186p20066186.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
>
--
--Noble Paul
d.
>
> But when I invoke a *:* query, it displays only 9 records.
>
> Similarly, for another entity that indexes around 500 records, a *:* gives
> only 4 responses.
>
> Why this inconsistency? How can I fix it before deploying it in actual
> production.
> Thanks
> con
R.userID = CUSTOMER.userID">
>
>
>
>
> query="select * from USER , MANAGER where USER.desig = MANAGER.desig">
>
>
>
>
> But the searching will not give all the results even if there is only one
> result. whereas indexing is fine.
> T
ML-format-for-multi-valued-fields--tp20015951p20015951.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
>
--
--Noble Paul
ex.Pattern$GroupHead.match(Pattern.java:4578)
>at java.util.regex.Pattern$LazyLoop.match(Pattern.java:4767)
>at java.util.regex.Pattern$GroupTail.match(Pattern.java:4637)
>at java.util.regex.Pattern$All.match(Pattern.java:4079)
>
> Thanks.
>
> - Jon
>
>
--
--Noble Paul