I have absolutely no idea when it comes to Drupal; the Drupal folks would be
much better equipped to answer.
Best,
Erick
> On Feb 21, 2019, at 8:16 AM, Greg Robinson wrote:
Thanks for the feedback.
So here is where I'm at.
I first went ahead and deleted the existing core that was returning the
error using the following command: bin/solr delete -c new_solr_core
Now when I access the admin panel, there are no errors.
I then referred to the large "warning" box on
On 2/20/2019 11:07 AM, Greg Robinson wrote:
Gotcha.
Lets try this: https://imgur.com/a/z5OzbLW
What I'm trying to do seems pretty straightforward:
1. Install Solr Server 7.4 on Linux (Completed)
2. Connect my Drupal 7 site to the Solr Server and use it for indexing
content
My understanding is that I must first create a core in order to
Attachments generally are stripped by the mail server.
Are you trying to create a core as part of a SolrCloud _collection_? If so, this
is an anti-pattern, use the collection API commands. Shot in the dark.
Best,
Erick
> On Feb 19, 2019, at 3:05 PM, Greg Robinson wrote:
I used the front end admin (see attached)
thanks
On Tue, Feb 19, 2019 at 3:54 PM Erick Erickson wrote:
Hmmm, that’s not very helpful…..
Don’t quite know what to say. There should be something more helpful
in the logs.
Hmmm, How did you create the core?
Best,
Erick
> On Feb 19, 2019, at 1:29 PM, Greg Robinson wrote:
Thanks for your direction regarding the log.
I was able to locate it and these two lines stood out:
Caused by: org.apache.solr.common.SolrException: Could not load conf for
core new_solr_core: Error loading solr config from
/home/solr/server/solr/new_solr_core/conf/solrconfig.xml
Caused by:
Do a recursive search for "solr.log" under SOLR_HOME...
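That recursive search can be sketched with find; the layout below is a scratch stand-in for SOLR_HOME, not a real install:

```shell
# Hypothetical SOLR_HOME layout for illustration only; point SOLR_HOME at
# your real install instead of the scratch directory created here.
SOLR_HOME=$(mktemp -d)
mkdir -p "$SOLR_HOME/server/logs"
touch "$SOLR_HOME/server/logs/solr.log"

# The recursive search itself: print every solr.log under SOLR_HOME.
find "$SOLR_HOME" -name "solr.log"
```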
Best,
Erick
> On Feb 19, 2019, at 8:08 AM, Greg Robinson wrote:
Hi Erick,
Thanks for the quick response.
Here is what is currently contained within the conf dir:
drwxr-xr-x 2 root root 4096 Feb 18 17:51 lang
-rw-r--r-- 1 root root 54513 Feb 18 17:51 managed-schema
-rw-r--r-- 1 root root 329 Feb 18 17:51 params.json
-rw-r--r-- 1 root root 894 Feb 18
Are all the other files there in your conf dir? solrconfig.xml references
things like managed-schema etc.
Also, your log file might contain more clues...
On Tue, Feb 19, 2019, 08:03 Greg Robinson wrote:
> Hello,
>
> We have Solr 7.4 up and running on a Linux machine.
>
> I'm just trying to add a new
*Hello*
*The code which worked for me:*
SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/shakespeare").build();
SolrQuery query = new SolrQuery();
query.setRequestHandler("/select");
query.setQuery("text_entry:henry");
QueryResponse response = client.query(query);
On 1/8/2018 10:23 AM, Deepak Goel wrote:
> *I am trying to search for documents in my collection (Shakespeare). The
> code is as follows:*
>
> SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/shakespeare").build();
>
> SolrDocument doc = client.getById("2");
> *However
Got it. Thank you for your help.
Deepak
"Please stop cruelty to Animals, help by becoming a Vegan"
+91 73500 12833
deic...@gmail.com
Facebook: https://www.facebook.com/deicool
LinkedIn: www.linkedin.com/in/deicool
"Plant a Tree, Go Green"
On Mon, Jan 8, 2018 at 11:48 PM, Deepak Goel
*Is this right?*
SolrClient client = new HttpSolrClient.Builder("
http://localhost:8983/solr/shakespeare/select;).build();
SolrQuery query = new SolrQuery();
query.setQuery("henry");
query.setFields("text_entry");
query.setStart(0);
queryResponse =
I think you are missing the /query handler endpoint in the URL, plus the
actual search parameters.
You may try using the admin UI to build your queries first.
Regards,
Alex
On Jan 8, 2018 12:23 PM, "Deepak Goel" wrote:
Hold it. "date", "tdate", "pdate" _are_ primitive types. Under the
covers date/tdate are just a tlong type, newer Solrs have a "pdate"
which is a point numeric type. All that these types do is some parsing
up front so you can send human-readable data (and get it back). But
under the covers it's
While you're generally right, in this case it might make sense to stick
to a primitive type.
I see "unixtime" as technical information, probably from
System.currentTimeMillis(). As long as it's not used as a "real world"
date but only for sorting based on latest updates, or choosing which
Some time ago there was a Solr installation which had the same problem, and the
author explained to me that the choice was made for performance reasons.
Apparently he was sure that handling everything as primitive types would
give a boost to the Solr searching/faceting performance.
I never agreed ( and
What Hoss said, and in addition somewhere some
custom code has to be translating things back and
forth. For dates, Solr wants YYYY-MM-DDTHH:MM:SSZ
as a date string it knows how to deal with. That simply
couldn't parse as a float type so there's some custom
code that transforms dates into a float
: Here is my question. In schema.xml, there is this field:
: Question: why is this declared as a float datatype? I'm just looking
: for an explanation of what is there – any changes come later, after I
: understand things better.
You would have to ask the creator of that
Use PatternReplaceCharFilterFactory first. The difference is that
PatternReplaceCharFilterFactory works on the entire input, whereas
PatternReplaceFilterFactory works only on the tokens emitted by the
tokenizer. A concrete example using WhitespaceTokenizerFactory would be
this [is some ] text
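A rough plain-Java sketch of the difference, with regex replacement standing in for the two factories (an illustration of the idea, not the actual Solr classes):

```java
import java.util.ArrayList;
import java.util.List;

public class CharVsTokenFilter {
    // CharFilter-style replacement sees the entire input before tokenization,
    // so a pattern may match across whitespace.
    static String charFilterStyle(String input, String pattern, String repl) {
        return input.replaceAll(pattern, repl);
    }

    // TokenFilter-style replacement runs after a whitespace tokenizer, so the
    // pattern can only ever match within a single token.
    static List<String> tokenFilterStyle(String input, String pattern, String repl) {
        List<String> out = new ArrayList<>();
        for (String tok : input.split("\\s+")) {
            out.add(tok.replaceAll(pattern, repl));
        }
        return out;
    }

    public static void main(String[] args) {
        String text = "this [is some ] text";
        // The bracketed span contains whitespace, so only the char-filter
        // style removal can strip it as a unit.
        System.out.println(charFilterStyle(text, "\\[.*?\\]", ""));
        // Per-token, neither "[is" nor "]" matches the whole pattern,
        // so the brackets survive.
        System.out.println(tokenFilterStyle(text, "\\[.*?\\]", ""));
    }
}
```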
the /solr is a "chroot" -- if used, everything for solr goes into
zookeeper's /solr "directory"
It isn't required, but is very useful for keeping things separated. I use
it to handle different Solr versions for upgrading (/solr5_4 or /solr6_2)
If not used, everything you put into Zookeeper
Many Thanks! I will move this to a cloudera list.
On Wed, Sep 7, 2016 at 2:26 PM, Erick Erickson wrote:
Well, first off the ZK ensemble string is usually specified as
dayrhegapd016.enterprisenet.org:2181,host2:2181,host3:2181/solr
(note that the /solr is only at the end, not every node).
Second, I always get confused whether the /solr is necessary or not.
Again, though, the Cloudera user's list is
Thanks Erick,
So it seems like the problem is that when I upload the configs to zookeeper
and then inspect it with zookeeper-client, ls /solr/configs shows as
empty.
I executed the following command to upload the config
solrctl --zk
I'm a bit rusty on solrctl (and you might get faster/more up-to-date
responses on the Cloudera lists). But to create a collection, you
first need to have uploaded the configs to Zookeeper, things like
schema.xml, solrconfig.xml etc. I forget
what the solrctl command is, but something like
Gonzalo,
Thanks for responding,
I executed the command with the parameters you suggested; it still shows me the same error.
Sincerely,
darshan
On Wed, Sep 7, 2016 at 1:13 PM, Gonzalo Rodriguez <
grodrig...@searchtechnologies.com> wrote:
Hi Darshan,
It looks like you are listing the instanceDir's name twice in the create
collection command, it should be
$ solrctl --zk host:2181/solr --solr host:8983/solr/ collection --create
Catalog_search_index -s 10 -c Catalog_search_index
Without the extra ". Catalog_search_index" at the
To pile on to Chris' comment. In the M/S situation
you describe, all the query traffic goes to the slave.
True, this relieves the slave from doing the work of
indexing, but it _also_ prevents the master from
answering queries. So going to SolrCloud trades
off indexing on _both_ machines to also
: I can see there is something called a "core" ... it appears there can be
: many cores for a single SOLR server.
:
: Can someone "explain like I'm five" -- what is a core?
https://cwiki.apache.org/confluence/display/solr/Solr+Cores+and+solr.xml
"In Solr, the term core is used to refer to a
: The database of server 2 is considered the "master" and it is replicated
: regularly to server 1, the "slave".
:
: The advantage is the responsiveness of server 1 is not impacted when server
: 2 gets busy with lots of indexing.
:
: QUESTION: When deploying a SOLR 5 setup, do I set things up
On Wed, 2014-03-19 at 11:55 +0100, Colin R wrote:
We run a central database of 14M (and growing) photos with dates, captions,
keywords, etc.
We are currently upgrading from old Lucene servers to the latest Solr running with a
couple of dedicated servers (6-core, 36GB, 500GB SSD). Planning on using
Hi Toke
Thanks for replying.
My question is really regarding index architecture. One big or many small
(with merged big ones)
We probably get 5-10K photos added each day. Others are updated, some are
deleted.
Updates need to happen quite fast (e.g. within minutes of our Databases
receiving
On Wed, 2014-03-19 at 13:28 +0100, Colin R wrote:
My question is really regarding index architecture. One big or many small
(with merged big ones)
One difference is that having a single index/collection gives you better
ranked searches within each collection. If you only use date/filename
Hi Toke
Our current configuration is Lucene 2.(something) with a RAILO/CFML app server.
10K drives, Quad Core, 16GB, Two servers. But the indexing and searching are
starting to fail and our developer is no longer with us so it is quicker to
rebuild than fix all the code.
Our existing config is lots
Oh my. 2.(something) is ancient; I second your move
to scrap the current situation and start over. I'm
really curious what the _reasons_ for such a complex
setup are/were.
I second Toke's comments. This is actually
quite small by modern Solr/Lucene standards.
Personally I would index them all to a
On 3/19/2014 4:55 AM, Colin R wrote:
My question is an architecture one.
These photos are currently indexed and searched in three ways.
1: The 14M pictures from above are split into a few hundred indexes that
feed a single website. This means index sizes of between 100 and 500,000
entries
: How do I achieve: add if not there, fail if a duplicate is found. I thought
You can use the optimistic concurrency features to do this, by including a
_version_=-1 field value in the document.
this will instruct solr that the update should only be processed if the
document does not already
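As a sketch, an add sent to the JSON /update endpoint with optimistic concurrency might look like this (field names are hypothetical):

```json
[
  {
    "id": "doc1",
    "title_t": "first version of doc1",
    "_version_": -1
  }
]
```

With _version_ set to -1, the add succeeds only if no document with id "doc1" exists yet; otherwise Solr rejects the update with a version-conflict error.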
A follow up question on this (as it is kind of new functionality).
What happens if several documents are submitted and one of them fails
due to that? Do they get rolled back or only one?
Regards,
Alex.
Personal website: http://www.outerthoughts.com/
LinkedIn:
You need to use dynamicField not field, that's all :)
Erik
On Feb 20, 2013, at 4:06, Erik Dybdahl erik...@gmail.com wrote:
Hi,
I'm currently assessing lucene/solr as a search front end for documents
currently stored in an rdbms.
The data has been made searchable to clients, in a way so
On Wed, 2013-02-20 at 10:06 +0100, Erik Dybdahl wrote:
However, after defining
<field name="customerField_*" type="string" indexed="true"
stored="true" multiValued="true"/>
Seems like a typo to me: You need to write dynamicField, not
field, when defining a dynamic field.
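The corrected definition would be along these lines (attribute values copied from the original post; a sketch, not tested):

```xml
<dynamicField name="customerField_*" type="string"
              indexed="true" stored="true" multiValued="true"/>
```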
Regards,
Toke Eskildsen
Excellent, works like a charm!
Though embarrassing, it's still a good thing the only problem was me being
blind :-)
Thank you, Toke and Erik.
On Wed, Feb 20, 2013 at 11:47 AM, Toke Eskildsen
t...@statsbiblioteket.dk wrote:
On Wed, 2013-02-20 at 10:06 +0100, Erik Dybdahl wrote:
However, after
Erick, I'll do that. Thank you very much.
Regards,
Jacek
On Tue, May 1, 2012 at 7:19 AM, Erick Erickson erickerick...@gmail.com wrote:
The easiest way is to do that in the app. That is, return the top
10 to the app (by score) then re-order them there. There's nothing
in Solr that I know of that does what you want out of the box.
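A sketch of that client-side re-ordering, assuming each returned hit carries some secondary value to sort by (the "popularity" field here is hypothetical, not from the original thread):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ReorderTopDocs {
    // One row of the top-N returned by Solr. "popularity" is a hypothetical
    // per-document value the app wants to re-order by.
    static class Hit {
        final String id;
        final float score;
        final int popularity;
        Hit(String id, float score, int popularity) {
            this.id = id; this.score = score; this.popularity = popularity;
        }
    }

    // Take the top-N hits as ranked by Solr score and re-order that small
    // list client-side by the secondary criterion.
    static List<Hit> reorderByPopularity(List<Hit> topByScore) {
        List<Hit> out = new ArrayList<>(topByScore);
        out.sort(Comparator.comparingInt((Hit h) -> h.popularity).reversed());
        return out;
    }

    public static void main(String[] args) {
        List<Hit> top = List.of(
                new Hit("a", 9.1f, 3),
                new Hit("b", 8.7f, 10),
                new Hit("c", 7.2f, 5));
        // "b" has the highest popularity, so it comes first after re-ordering.
        System.out.println(reorderByPopularity(top).get(0).id);
    }
}
```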
Best
Erick
On Mon, Apr 30, 2012 at 11:10 AM, Jacek pjac...@gmail.com wrote:
Hello all,
I'm facing
: If using CommonsHttpSolrServer query() method with parameter wt=json, when
: retrieving QueryResponse, how to do to get JSON result output stream ?
when you are using the CommonsHttpSolrServer level of API, the client
takes care of parsing the response (which is typically in an efficient
Hi Steve,
I've filed a new JIRA issue along with the patch, which can be found at
<https://issues.apache.org/jira/browse/LUCENE-3406>.
Please let me know if you see any problem.
Thanks!
-Sid
I know I can have multiple values on them, but that doesn't let me see that
a showing instance happens at a particular time on a particular
channel, just that it shows on a range of channels at a range of times.
Starting to think I will have to either store a formatted string that
combines them or keep
Hi sid,
The current source packaging scheme aims to *avoid* including local changes :),
so yes, there is no support currently for what you want to do.
Prior to https://issues.apache.org/jira/browse/LUCENE-2973, the source
packaging scheme used the current sources rather than pulling from
nope, it's not easy. Solr docs are flat, flat, flat with the tiny
exception that multiValued fields are returned as lists.
However, you can count on multi-valued fields being returned
in the order they were added, so it might work out for you to
treat these as parallel arrays in Solr documents.
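For example, a flattened document using parallel multiValued fields might look like this (field names hypothetical; position i in one list pairs with position i in the other, relying on Solr preserving insertion order):

```json
{
  "id": "movie-1",
  "title": "Example Film",
  "channel_ss": ["Channel 4", "Channel 5"],
  "starttime_dts": ["2011-08-25T20:00:00Z", "2011-08-26T21:00:00Z"]
}
```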
I have come to that conclusion, so I had to choose between multiple fields with
multiple values or a field with delimited text; I've gone for the former.
On Thu, Aug 25, 2011 at 7:58 PM, Erick Erickson erickerick...@gmail.com wrote:
Delimited text is the baby form of lists.
Text can be made very very structured (think XML, ontologies...).
I think the crux is your search needs.
For example, with Lucene, I made a search for formulæ (including sub-terms) by
converting the OpenMath-encoded terms into rows of tokens and querying
My search is very simple, mainly on titles, actors, show times and channels.
Having multiple lists of values is probably better for that, and as the
order is kept the same it's relatively simple to map the response back onto
POJOs for my presentation layer.
On Thu, Aug 25, 2011 at 8:18 PM, Paul
Whether multi-valued or token-streams, the question is search, not
(de)serialization: that's opaque to Solr which will take and give it to you as
needed.
paul
On 25 August 2011 at 21:24, Zac Tolley wrote:
You could change starttime and channelname to multiValued=true and use
these fields to store all the values for those fields.
showing.movie_id and showing.id probably aren't needed in a Solr record.
On 8/24/11 7:53 AM, Zac Tolley wrote:
I have a very scenario in which I have a film and
Subject: Re: Newbie question: how to deal with different # of search
results per page due to pagination then grouping
How do you know whether to provide a 'next' button, or whether you are
the end of your facet list?
On 6/1/2011 4:47 PM, Robert Petersen wrote
There's no great way to do that.
One approach would be using facets, but that will just get you the
author names (as stored in fields), and not the documents under it. If
you really only want to show the author names, facets could work. One
issue with facets though is Solr won't tell you the
Don't manually group by author from your results, the list will always
be incomplete... use faceting instead to show the authors of the books
you have found in your search.
http://wiki.apache.org/solr/SolrFacetingOverview
-Original Message-
From: beccax [mailto:bec...@gmail.com]
Sent:
/solr/SimpleFacetParameters#facet.offset
-Original Message-
From: Jonathan Rochkind [mailto:rochk...@jhu.edu]
Sent: Wednesday, June 01, 2011 12:41 PM
To: solr-user@lucene.apache.org
Subject: Re: Newbie question: how to deal with different # of search
results per page due to pagination
On 6/1/2011 4:47 PM, Robert Petersen wrote:
I think facet.offset allows
Sent: Sunday, May 29, 2011 9:00 PM
To: solr-user@lucene.apache.org
Subject: Re: newbie question for DataImportHandler
This trips up a lot of folks. Solr just marks docs as deleted, the terms etc
are left in the index until an optimize is performed, or the segments are
merged. This latter isn't very predictable, so just do an optimize.
The docs aren't returned as results though.
Best
Erick
On May 24, 2011 10:22
Sounds like you might not be committing the delete. How are you deleting it?
If you run the data import handler with clean=true (which is the default) it
will delete the data for you anyway so you don't need to delete it yourself.
Hope that helps.
-Original Message-
From: antoniosi
Hi Lance and Gora,
Thanks for your support!
I have changed
<fields>
  <field name="Shop_artikel_rg" type="string" indexed="true" stored="true"/>
  <field name="Artikel" type="string" indexed="true" stored="true"/>
  <field name="Omschrijving" type="string" indexed="true" stored="true"/>
</fields>
Into
On Sat, 4 Sep 2010 01:15:11 -0700 (PDT)
BobG b...@bitwise-bncc.nl wrote:
Hi,
I am trying to set up a new SOLR search engine on a windows
platform. It seems like I managed to fill an index with the
contents of my SQL server table.
When I use the default *:* query I get a nice result:
More directly: if the 'Artikel' field is a string, only the whole
string will match:
Artikel:"Kerstman baardstel".
Or you can use a wildcard: Kerstmann* or just Kerst*
If it is a text field, it is chopped into words and
q=Artikel:Kerstmann would work.
Gora Mohanty wrote:
On Sat, 4 Sep
You can append it in your middleware, or try the EdgeNGramTokenizer [1]. If
you're going for the latter, don't forget to reindex and expect a larger index.
[1]:
http://lucene.apache.org/java/2_9_0/api/all/org/apache/lucene/analysis/ngram/EdgeNGramTokenizer.html
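A plain-Java sketch of what edge n-gram tokenization produces for a single term (an illustration of the idea, not the Lucene class itself): indexing every prefix lets a plain term query match the start of a longer word without wildcards.

```java
import java.util.ArrayList;
import java.util.List;

public class EdgeNGrams {
    // Emit every leading prefix of the term, from minGram to maxGram
    // characters, just as an edge n-gram tokenizer would at index time.
    static List<String> edgeNGrams(String term, int minGram, int maxGram) {
        List<String> grams = new ArrayList<>();
        for (int n = minGram; n <= Math.min(maxGram, term.length()); n++) {
            grams.add(term.substring(0, n));
        }
        return grams;
    }

    public static void main(String[] args) {
        // For "henry" with min=2, max=4: [he, hen, henr]
        System.out.println(edgeNGrams("henry", 2, 4));
    }
}
```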
-Original message-
Add a commit after the loop. I would advise doing the commit in a separate
thread. I keep a separate timer thread where every minute I do a
commit, and at the end of every day I optimize the index.
Regards
Aditya
www.findbestopensource.com
On Tue, Jun 1, 2010 at 2:57 AM, Steve Kuo
I would additionally suggest using EmbeddedSolrServer for large uploads if
possible; performance is better.
2010/5/31 Steve Kuo kuosen...@gmail.com
I have a newbie question on what is the best way to batch add/commit a
large
collection of document data via solrj. My first attempt was to
: CommonsHttpSolrServer.request() resulting in multiple searchers. My first
: thought was to change the configs for autowarming. But after looking at the
: autowarm params, I am not sure what can be changed or perhaps a different
: approach is recommended.
even with 0 autowarming (which is what
Move the commit outside your loop and you'll be in better shape.
Better yet, enable autocommit in solrconfig.xml and don't commit from
your multithreaded client, otherwise you still run the risk of too
many commits happening concurrently.
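A sketch of that autocommit setting in solrconfig.xml (the values are illustrative, not recommendations):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxDocs>10000</maxDocs> <!-- commit after this many uncommitted docs -->
    <maxTime>60000</maxTime> <!-- ...or after 60 seconds, whichever comes first -->
  </autoCommit>
</updateHandler>
```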
Erik
On May 31, 2010, at 5:27 PM, Steve
dismax won't quite give you the same query result. What you can do
pretty easily, though, is create a QParser and QParserPlugin pair,
register it in solrconfig.xml, and then use defType=<the name you registered>.
Pretty straightforward. Have a look at Solr's various QParserPlugin
implementations for
What's the point of generating your own query?
Are you sure that solr query syntax cannot satisfy your need?
2010/1/29 Abin Mathew abin.mat...@toostep.com
Hi I want to generate my own customized query from the input string entered
by the user. It should look something like this
*Search field
Hi, I realized the power of the Dismax query handler recently and now I
don't need to generate my own query, since Dismax is giving better
results. Thanks a lot
2010/1/29 Wangsheng Mei hairr...@gmail.com:
What's the point of generating your own query?
Are you sure that solr query syntax cannot satisfy
Hello Shalin,
thank you for your help. Yes, it answers my question.
Much appreciated
Shalin Shekhar Mangar wrote:
On Tue, May 12, 2009 at 9:48 PM, Wayne Pope waynemailingli...@gmail.com wrote:
I have this request:
http://localhost:8983/solr/select?start=0&rows=20&qt=dismax&q=copy&hl=true&hl.snippets=4&hl.fragsize=50&facet=true&facet.mincount=1&facet.limit=8&facet.field=type&fq=company-id%3A1&wt=javabin&version=2.2
Just an FYI: I've never tried, but there seems to be RSS feed sample in DIH:
http://wiki.apache.org/solr/DataImportHandler#head-e68aa93c9ca7b8d261cede2bf1d6110ab1725476
Koji
Tom H wrote:
Hi,
I've just downloaded solr and got it working, it seems pretty cool.
I have a project which needs to
: Is it possible to define more than one schema? I'm reading the example
: schema.xml. It seems that we can only define one schema? What about if I
: want to define one schema for document type A and another schema for
: document type B?
there are lots of ways to tackle a problem like this,
have two different cores and you can have separate schema for each.
On Thu, Jan 29, 2009 at 1:20 PM, Cheng Zhang zhangyongji...@yahoo.com wrote:
Hello,
Is it possible to define more than one schema? I'm reading the example
schema.xml. It seems that we can only define one schema? What about
I can help a bit with 2...
First, keep in mind the difference between index and query
time boosting:
From Hossman:
..Index time field boosts are a way to express things like:
this document's title is worth twice as much as the title of most documents;
query time boosts are a way to express: I care
On Dec 3, 2008, at 11:53 AM, Sudarsan, Sithu D. wrote:
Hi All,
Using Lucene, index has been created. It has five different fields.
How to just use those index from SOLR for searching? I tried changing
the schema as in tutorial, and copied the index to the data directory,
but all searches
anything that is passed as a request parameter can be put into the
SearchHandler's defaults or invariants section.
This is equivalent to passing the shard url in the request.
However, this expects that you may need to set up a loadbalancer if a
shard has more than one host
On Wed, Nov 26, 2008
: Logging an error and returning successfully (without adding any docs) is
: still inconsistent with the way all other RequestHandlers work: fail the
: request.
:
: I know DIH isn't a typical RequestHandler, but some things (like failing
: on failure) seem like they should be a given.
:
On Mon, Nov 24, 2008 at 7:25 AM, Chris Hostetter
[EMAIL PROTECTED] wrote:
-integration/talend-open-studio.php
Thanks,
Lance
-Original Message-
From: Noble Paul നോബിള് नोब्ळ् [mailto:[EMAIL PROTECTED]
Sent: Friday, November 21, 2008 8:12 PM
To: solr-user@lucene.apache.org
Subject: Re: Newbie Question - getting search results from dataimport request
handler
On Sat
: it might be worth considering a new @attribute for fields to indicate
: that they are going to be used purely as component fields (ie: your
: first-name/last-name example) and then have DIH pass all non-component
: fields along and error if undefined in the schema just like other updating
:
On Sat, Nov 22, 2008 at 3:10 AM, Chris Hostetter
[EMAIL PROTECTED] wrote:
On Sat, Nov 15, 2008 at 6:33 AM, Chris Hostetter
[EMAIL PROTECTED] wrote:
: Is there a bug in DIH that caused these unrecognized fields to be ignored,
: or is it possible the errors were logged (by DUH2 maybe? ... it's been a
: while since i looked at the update code) but DIH didn't notice
: You need to modify the schema which came with Solr to suit your data. There
If I'm understanding this thread correctly, DIH ran successfully, docs
were created, some fields were stored and indexed (because they did exist
in the schema) but other fields the user was attempting to create
On Thu, Nov 13, 2008 at 3:52 AM, Chris Hostetter
[EMAIL PROTECTED]wrote:
you cannot query the DIH. It can only do indexing.
After indexing, you must do the querying on the regular query interface.
On Tue, Nov 11, 2008 at 9:45 AM, Kevin Penny [EMAIL PROTECTED] wrote:
My Question is: what is the format of a search that will return data?
i.e.
with sql data and not xml data.
Thanks
-Original Message-
From: Noble Paul നോബിള് नोब्ळ् [mailto:[EMAIL PROTECTED]
Sent: Monday, November 10, 2008 10:18 PM
To: solr-user@lucene.apache.org
Subject: Re: Newbie Question - getting search results from dataimport request
handler
is experimental. It is likely to change in the future.
</str>
</response>
Kevin
-Original Message-
From: Noble Paul നോബിള് नोब्ळ् [mailto:[EMAIL PROTECTED]
Sent: Monday, November 10, 2008 10:30 PM
To: solr-user@lucene.apache.org
Subject: Re: Newbie Question - getting search results from
Kevin
-Original Message-
From: Noble Paul നോബിള് नोब्ळ् [mailto:[EMAIL PROTECTED]
Sent: Monday, November 10, 2008 11:23 PM
To: solr-user@lucene.apache.org
Subject: Re: Newbie Question - getting search results from dataimport
request handler
search for *:* and see if the index indeed
Hi Kevin,
You need to modify the schema which came with Solr to suit your data. There
should be a schema.xml inside example/solr/conf directory. Once you do that,
re-import