If you are using a nightly you can try the new SolrReplication feature
http://wiki.apache.org/solr/SolrReplication
On Thu, Oct 23, 2008 at 4:32 AM, William Pierce [EMAIL PROTECTED] wrote:
Otis,
Yes, I had forgotten that Windows will not permit me to overwrite files
currently in use. So my
Please go through this URL once:
http://lucene.apache.org/solr/tutorial.html
--Noble
On Thu, Oct 23, 2008 at 2:37 PM, Laxmilal Menaria [EMAIL PROTECTED] wrote:
Hello,
I have created an index of my XML files; all these index files are located in
the data/index folder. Now I have updated the
It was committed on 10/21
take the latest 10/23 build
http://people.apache.org/builds/lucene/solr/nightly/solr-2008-10-23.zip
On Fri, Oct 24, 2008 at 2:27 AM, William Pierce [EMAIL PROTECTED] wrote:
I tried the nightly build from 10/18 -- I did the following:
a) I downloaded the nightly build
On Thu, Oct 23, 2008 at 4:54 PM, Noble Paul നോബിള് नोब्ळ्
[EMAIL PROTECTED] wrote:
Please go through this URL once:
http://lucene.apache.org/solr/tutorial.html
--Noble
On Thu, Oct 23, 2008 at 2:37 PM, Laxmilal Menaria [EMAIL PROTECTED]
wrote:
Hello,
I have created an index of my
On Thu, Oct 23, 2008 at 10:01 PM, Nick80 [EMAIL PROTECTED] wrote:
It was actually very easy. I followed the tutorial at
http://wiki.apache.org/solr/DataImportHandler . The only thing I forgot was
that I had to define the fields that I have in data-config.xml also in
solrconfig.xml. Another
You must have your entities nested like these:
<entity name="campaign">
  <entity name="banner">
    <entity name="size">
    </entity>
  </entity>
</entity>
banner and size must be multiValued
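A data-config.xml sketch of that shape (the queries, table names, and column names below are illustrative, not from the original mail):

```xml
<document>
  <!-- one Solr doc per campaign; the nested entities contribute
       multiple banner_type and size values to that doc -->
  <entity name="campaign" query="select id, name from campaign">
    <entity name="banner"
            query="select id, type as banner_type from banner
                   where campaign_id='${campaign.id}'">
      <entity name="size"
              query="select dimension as size from banner_size
                     where banner_id='${banner.id}'"/>
    </entity>
  </entity>
</document>
```

banner_type and size then need multiValued="true" in schema.xml, since a single campaign doc collects several banner rows.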
On Thu, Oct 23, 2008 at 11:29 PM, Nick80 [EMAIL PROTECTED] wrote:
I did some more testing and encountered
Also, when will solr's replication handler release in an official release?
Can it be released as a patch on 1.3? It is terribly useful functionality
and if there's a way to get it out sooner, I'd sure appreciate it!
It is a possibility. You can raise a JIRA issue.
The feature depends on some
Probably you can paste your data-config.xml with the queries etc.
--Noble
On Fri, Oct 24, 2008 at 1:33 PM, Nick80 [EMAIL PROTECTED] wrote:
Hi Paul,
thanks for the answer but unfortunately it doesn't work. I have the
following:
<entity name="campaign">
<field name="id" column="id" />
<field
On Fri, Oct 24, 2008 at 5:14 PM, [EMAIL PROTECTED] wrote:
Hello,
I have some questions about DataImportHandler and Solr statistics...
1.)
I'm using the DataImportHandler for creating my Lucene index from XML files:
###
$ cat data-config.xml
<dataConfig>
<dataSource type="FileDataSource" />
oh. There is nothing wrong with indexing or querying.
Solr cannot store or return a document like
<arr name="banner_type">
  <str>flash
    <arr name="size">
      <str>50x50</str>
      <str>100x100</str>
    </arr>
  </str>
  <str>gif
    <arr name="size">
      <str>50x50</str>
I've updated the documentation at http://wiki.apache.org/solr/Solrj
All this code is not necessary:
((CommonsHttpSolrServer) server).setParser(new
BinaryResponseParser());
((CommonsHttpSolrServer) server).setParser(new XMLResponseParser());
((CommonsHttpSolrServer)
Are you sure you optimized the index?
It is useful only if your bandwidth is very low.
Otherwise the cost of copying/compressing/decompressing can take up
more time than we save.
On Tue, Oct 28, 2008 at 2:49 AM, Simon Collins
[EMAIL PROTECTED] wrote:
Is there an option on the replication
I may be a bit off the mark. It seems that DataImportHandler may be
able to do this very easily for you.
http://wiki.apache.org/solr/DataImportHandler#jdbcdatasource
On Fri, Oct 24, 2008 at 6:28 PM, Simon Collins
[EMAIL PROTECTED] wrote:
Hi
We're running solr on a win 2k3 box under tomcat
It is useful only if your bandwidth is very low.
Otherwise the cost of copying/compressing/decompressing can take up
more time than we save.
I mean compressing and transferring. If the optimized index itself has
a very high compression ratio then it is worth exploring the option
of
copy what is not already at the target. The PuTTY suite's 'pscp' program also
has a compression feature.
Lance
-Original Message-
From: Noble Paul നോബിള് नोब्ळ् [mailto:[EMAIL PROTECTED]
Sent: Monday, October 27, 2008 9:36 PM
To: solr-user@lucene.apache.org
Subject: Re
A BitDocSet does not take ~14M * sizeof(int) in memory;
it may take a maximum of
14M/8 bytes in memory ~= 1.75MB
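The arithmetic behind that estimate, as a quick sanity check (a plain sketch, not Solr code):

```python
# Memory to remember a subset of a 14M-doc index.
docs = 14_000_000

# Keeping an int docid per document would cost sizeof(int) = 4 bytes each.
int_array_bytes = docs * 4          # ~56 MB

# A bitset (BitDocSet) costs one bit per document in the whole index,
# regardless of how many docs the subset actually contains.
bitset_bytes = docs / 8

print(int_array_bytes / 1e6, "MB")  # 56.0 MB
print(bitset_bytes / 1e6, "MB")     # 1.75 MB
```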
On Tue, Oct 28, 2008 at 6:06 PM, Jérôme Etévé [EMAIL PROTECTED] wrote:
Hi all,
In my code, I'd like to keep a subset of my 14M docs which is around
100k large.
What is
compression is standard in HTTP? --wunder
On 10/29/08 4:35 AM, Noble Paul നോബിള് नोब्ळ् [EMAIL PROTECTED]
wrote:
Open a JIRA issue. We will use gzip on both ends of the pipe. On
the slave
side you can say
<str name="zip">true</str>
as an extra option to compress and
send data from the server.
--Noble
Hoss,
You are partially right. Instead of the HTTP header, we use a request
parameter. (RequestHandlers cannot read HTTP headers.) If the param is
present it wraps the response in a zip output stream. It is configured
on the slave because not every slave may want compression. Slaves
which are
Hi,
There are two sides to this.
1. Indexing (getting data into Solr): SolrJ or DataImportHandler can be
used for this.
2. Querying (getting data out of Solr): here you do not have the choice
of joining multiple tables. There is only one index for Solr.
On Thu, Oct 30, 2008 at 5:34 PM, Raghunandan
-
From: Noble Paul നോബിള് नोब्ळ् [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 30, 2008 6:16 PM
To: solr-user@lucene.apache.org
Subject: Re: Using Solrj
Hi,
There are two sides to this.
1. Indexing (getting data into Solr): SolrJ or DataImportHandler can be
used for this.
2. Querying
Run full-import with clean=false.
For full-import, clean is set to true by default; for delta-import,
clean is false by default.
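For example, assuming the handler is registered at /dataimport on the usual example port:

```text
http://localhost:8983/solr/dataimport?command=full-import&clean=false
```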
On Fri, Oct 31, 2008 at 9:16 AM, Lance Norskog [EMAIL PROTECTED] wrote:
I have a DataImportHandler configured to index from an RSS feed. It is a
latest stuff feed.
to say that I need to create a view according to my query and
then index on the view and fetch?
-Original Message-
From: Noble Paul നോബിള് नोब्ळ् [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 30, 2008 6:16 PM
To: solr-user@lucene.apache.org
Subject: Re: Using Solrj
hi
on out of memory exception with both MySQL and MS SQL Server
drivers.
http://wiki.apache.org/solr/DataImportHandler#faq
On Thu, Jun 26, 2008 at 9:36 AM, Noble Paul നോബിള് नोब्ळ्
[EMAIL PROTECTED] wrote:
We must document this information in the wiki. We never had a chance
to play with MS SQL.
The parser is StAX, but the XPath implementation is custom. Certain
XPath features are hard to implement in a streaming way.
There is no documentation yet.
You can access attributes like /root/a/b/@a
attribute values can be checked like
/root/a/[EMAIL PROTECTED]/x or
/root/a/[EMAIL
If you wish to create one doc per inner entity, then set
rootEntity=false for the outer entity.
The exception is because the url is wrong
On Sat, Nov 1, 2008 at 10:30 AM, Lance Norskog [EMAIL PROTECTED] wrote:
I wrote a nested HttpDataSource RSS poller. The outer loop reads an rss feed
which
Hi Jon,
Using a CachedSqlEntityProcessor on the root entity is of no use; it
is only as good as using a SqlEntityProcessor. For classes
belonging to the package 'org.apache.solr.handler.dataimport' the
package name can be omitted (for better readability).
On Sun, Nov 2, 2008 at 8:08 AM, Jon
Hi Lance,
Do a full import without debug and let us know if my suggestion worked
(rootEntity=false). If it didn't, I can suggest you something else
(writing a Transformer).
On Sun, Nov 2, 2008 at 8:13 AM, Noble Paul നോബിള് नोब्ळ्
[EMAIL PROTECTED] wrote:
If you wish to create 1 doc per inner
/update too (the
method is same getData()). The second entity can read from db and
create docs (see Jon baer's suggestion) using the
XPathEntityProcessor as a sub-entity
--Noble
On Mon, Nov 3, 2008 at 9:44 AM, Noble Paul നോബിള് नोब्ळ्
[EMAIL PROTECTED] wrote:
Hi Lance,
Do a full import w/o debug
=ScriptDataSource name=outerloop script=outerloop.js
/
(The script would basically contain just a callback - getData(String query)
that results in an array set or might set values on it's children, etc)
- Jon
On Nov 3, 2008, at 12:40 AM, Noble Paul നോബിള് नोब्ळ् wrote:
Hi Lance,
I guess I got
The attribute name is batchSize=-1 (it is case sensitive). This
ensures that the MySQL driver fetches rows one at a time.
http://wiki.apache.org/solr/DataImportHandlerFaq
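The attribute goes on the JdbcDataSource declaration; a sketch (driver class, URL, and credentials are illustrative):

```xml
<dataSource type="JdbcDataSource"
            driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost/mydb"
            user="solr" password="secret"
            batchSize="-1"/>
```

Under the hood batchSize="-1" sets the JDBC fetch size to Integer.MIN_VALUE, which is the MySQL driver's signal to stream results row by row instead of buffering the whole result set.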
On Mon, Nov 3, 2008 at 9:17 PM, sunnyfr [EMAIL PROTECTED] wrote:
Hi Shalin,
*
I would like to know if you just used batchsize = -1.
From the data-config.xml it is obvious that your indexing will
take a lot of time. MySQL has very poor join performance. It is not a
very good idea to run this on a production database.
I would suggest you configure another MySQL server, set up MySQL
replication to it, and run the import against that.
Thanks,
Lance
-Original Message-
From: Noble Paul നോബിള് नोब्ळ् [mailto:[EMAIL PROTECTED]
Sent: Saturday, November 01, 2008 7:44 PM
To: solr-user@lucene.apache.org
Subject: Re: DIH Http input bug - problem with two-level RSS walker
If you wish to create 1 doc per inner entity
can you tell what exactly you wish to customize?
On Wed, Nov 5, 2008 at 10:46 AM, Muhammed Sameer [EMAIL PROTECTED] wrote:
Salaam,
I read somewhere that it is better to write a new start.jar file than use the
one that is provided within the example directory, can someone please guide
me
(sdoc.positionInResponse, doc);
}
Any idea?
L.M.
2008/11/5 Noble Paul നോബിള് नोब्ळ् [EMAIL PROTECTED]
the 'fl' parameter can be added to the defaults for your search
handler in solrconfig.xml
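For example, in solrconfig.xml (the field list here is illustrative):

```xml
<requestHandler name="standard" class="solr.SearchHandler" default="true">
  <lst name="defaults">
    <!-- fields returned when the request sends no fl param -->
    <str name="fl">id,name,score</str>
  </lst>
</requestHandler>
```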
On Wed, Nov 5, 2008 at 3:22 PM, Luca Molteni [EMAIL PROTECTED] wrote:
Hello everybody,
dealing with very large fields, let's say text documents, I found that there
is a global slowness (on my computer)
did you try w/o escaping the '' characters?
On Wed, Nov 5, 2008 at 11:48 PM, Ahmed Hammad [EMAIL PROTECTED] wrote:
Hi,
I am using Solr 1.3 data import handler. One of my table fields has html
tags, I want to strip it of the field text. So obviously I need the Regex
Transformer.
I added
The performance of DIH is likely to be faster than SolrJ, because it
does not have the overhead of an HTTP request.
What is your data source? I am assuming it is XML. SolrJ cannot
directly index XML. You may need to read docs from the XML before SolrJ
can index them.
--Noble
On Wed, Nov 5, 2008
On Thu, Nov 6, 2008 at 7:04 PM, Steven Anderson [EMAIL PROTECTED] wrote:
The performance of DIH is likely to be faster than SolrJ,
because it does not have the overhead of an HTTP request.
Understood. However, we may not have the option of co-locating the data
to be ingested with the Solr
On Fri, Nov 7, 2008 at 3:28 AM, souravm [EMAIL PROTECTED] wrote:
Hi,
Can I use the multi-core feature to have multiple indexes (that is, each core
would take care of one type of index) within a single Solr instance?
Yes. And this is what it was conceived for.
Will there be any performance impact due
Hi Lance,
This is one area we left open in DIH. What is the best way to handle
this? On error, should it give up or continue with the next?
On Fri, Nov 7, 2008 at 12:44 AM, Lance Norskog [EMAIL PROTECTED] wrote:
You can also do streaming XML upload for the XML-based indexing. This can
feed,
On Fri, Nov 7, 2008 at 12:49 AM, Yonik Seeley [EMAIL PROTECTED] wrote:
Your problem is most likely the time it takes to facet on those
multi-valued fields.
Help is coming within the month I'd estimate, in the form of faster
faceting for multivalued fields where the number of values per
OK. You can raise an issue anyway.
On Fri, Nov 7, 2008 at 7:03 PM, Steven Anderson [EMAIL PROTECTED] wrote:
Ideally, it would be a configuration option.
Also, it would be great to have a hook to log or process an exception.
Steve
-Original Message-
From: Noble Paul നോബിള് नोब्ळ्
On Fri, Nov 7, 2008 at 5:48 PM, Vaijanath N. Rao [EMAIL PROTECTED] wrote:
Hi Solr-Users,
I am not sure, but does there exist any mechanism wherein we can tell
Solr to do batch and incremental indexing?
What I mean by batch indexing is that Solr would delete all the records which
existed in the
where ignoreerrors=break means that an error in Inner
#2 would prevent Inner #3.
Lance
-Original Message-
From: Noble Paul നോബിള് नोब्ळ् [mailto:[EMAIL PROTECTED]
Sent: Thursday, November 06, 2008 8:39 PM
To: solr-user@lucene.apache.org
Subject: Re: Large Data Set Suggestions
Hi
You must let the tool identify the changed rows instead of providing a
select * from
See the section for more details:
http://wiki.apache.org/solr/DataImportHandler#head-9ee74e0ad772fd57f6419033fb0af9828222e041
On Sun, Nov 9, 2008 at 1:23 AM, con [EMAIL PROTECTED] wrote:
Hi guys,
I have a
I'm not sure what kind of interfaces WordPress exposes. Does it have a
DB/REST endpoint?
If so, it would be very easy to write a sample data-config.xml for wordpress.
--Noble
On Mon, Nov 10, 2008 at 8:13 PM, Grant Ingersoll [EMAIL PROTECTED] wrote:
I don't know of anyone that has done this,
You cannot query the DIH; it can only do indexing.
After indexing you must do the querying on the regular query interface.
On Tue, Nov 11, 2008 at 9:45 AM, Kevin Penny [EMAIL PROTECTED] wrote:
My Question is: what is the format of a search that will return data?
i.e.
execution strings like:
http://localhost:8983/solr/select/?indent=on&q=video&sort=price+desc
etc however I'm working with sql data and not xml data.
Thanks
-Original Message-
From: Noble Paul നോബിള് नोब्ळ् [mailto:[EMAIL PROTECTED]
Sent: Monday, November 10, 2008 10:18 PM
To: solr-user
is experimental. It is likely to change in the future.
</str>
</response>
Kevin
-Original Message-
From: Noble Paul നോബിള് नोब्ळ् [mailto:[EMAIL PROTECTED]
Sent: Monday, November 10, 2008 10:30 PM
To: solr-user@lucene.apache.org
Subject: Re: Newbie Question - getting search results from
Are you using SolrJ? Then look at
org.apache.solr.client.solrj.request.CoreAdminRequest
Even if you are not using SolrJ, the same commands can be issued over HTTP.
On Tue, Nov 11, 2008 at 12:00 PM, RaghavPrabhu [EMAIL PROTECTED] wrote:
Hi all,
I want to create dynamic cores in my app.
notation as in the query attribute (e.g. ${parententityname.fieldname} ) to
access fields from parent entities, which allows them to merge data from
multiple related rows, not just different columns.
-Mauricio
On Mon, Nov 10, 2008 at 8:27 PM, Noble Paul നോബിള് नोब्ळ्
[EMAIL PROTECTED
Why is the id field multiValued? Is there a uniqueKey in the schema?
Are you sure there are no duplicates?
Look at the status; host:port/dataimport gives you the status.
It can give you some clue.
--Noble
On Wed, Nov 12, 2008 at 4:53 AM, Giri [EMAIL PROTECTED] wrote:
Hi,
I have about ~ 2
DIH can delete rows from the index; look at the 'deletedPkQuery' option.
http://wiki.apache.org/solr/DataImportHandler#head-70d3fdda52de9ee4fdb54e1c6f84199f0e1caa76
Deleting from the DB is not possible for DIH, but you can write a
Transformer or EntityProcessor which can do that.
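A sketch of where deletedPkQuery sits in data-config.xml (the table, columns, and deleted flag are illustrative):

```xml
<entity name="item" pk="id"
        query="select id, name from item"
        deletedPkQuery="select id from item where deleted = 1">
  <!-- during delta-import, the ids returned by deletedPkQuery
       are removed from the Solr index -->
</entity>
```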
On Wed, Nov
, Noble Paul നോബിള് नोब्ळ्
[EMAIL PROTECTED] wrote:
Why is the id field multiValued? Is there a uniqueKey in the schema?
Are you sure there are no duplicates?
Look at the status; host:port/dataimport gives you the status.
It can give you some clue.
--Noble
On Wed, Nov 12, 2008 at 4:53 AM
On Thu, Nov 13, 2008 at 3:52 AM, Chris Hostetter
[EMAIL PROTECTED] wrote:
: You need to modify the schema which came with Solr to suit your data. There
If i'm understanding this thread correctly, DIH ran successfully, docs
were created, some fields were stored and indexed (because they did
/ --
/schema
-
On Wed, Nov 12, 2008 at 11:01 PM, Noble Paul നോബിള് नोब्ळ्
[EMAIL PROTECTED] wrote:
the fact that it got committed
On Sat, Nov 15, 2008 at 6:33 AM, Chris Hostetter
[EMAIL PROTECTED] wrote:
: Is here a bug in DIH that caused these unrecognized fields to be ignored,
: or is it possible the errors were logged (by DUH2 maybe? ... it's been a
: while since i looked at the update code) but DIH didn't notice
Is this issue visible consistently? I mean, are you able to
reproduce it easily?
On Fri, Nov 14, 2008 at 11:15 PM, William Pierce [EMAIL PROTECTED] wrote:
Folks:
I am using the nightly build of 1.3 as of Oct 23 so as to use the replication
handler. I am running on windows 2003 server
Any update processor can be used with DIH. First of all you may
register your dedupe update processor as you do now. You can either
pass update.processor as a request parameter or you can keep
it in the 'defaults' of the dataimport handler:
<str name="update.processor">dedupe</str>
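In solrconfig.xml that would look roughly like this ('dedupe' is assumed to be the name of your registered update processor chain):

```xml
<requestHandler name="/dataimport"
                class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">data-config.xml</str>
    <!-- processor chain applied to every doc DIH creates -->
    <str name="update.processor">dedupe</str>
  </lst>
</requestHandler>
```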
On Mon,
On Thu, Nov 13, 2008 at 10:43 PM, sunnyfr [EMAIL PROTECTED] wrote:
Hi everybody,
I don't really get when I have to re-index data and when not.
I did a full import but I realised I stored too many fields which I don't
need.
So I have to change some indexed fields which are stored to not
Nope, it is not possible as of now; the placeholders are not aware of
the core properties.
Is it possible to pass the values as request params? Request
parameters can be accessed.
You can raise an issue and we can address this separately.
On Mon, Nov 17, 2008 at 7:57 PM, [EMAIL PROTECTED]
On Tue, Nov 18, 2008 at 2:49 AM, Ahmed Hammad [EMAIL PROTECTED] wrote:
Hi All,
Although the HTMLStripStandardTokenizerFactory will remove HTML tags at
index time, the tags will still be stored and need to be removed when
searching. In my case the HTML tags are not needed at all. So I created
If the user is using the new java Solr replication then he can get rid
of the /update and /update/csv handlers altogether. So the slaves are
completely read-only
--Noble
On Tue, Nov 18, 2008 at 2:14 AM, Sean Timm [EMAIL PROTECTED] wrote:
I believe the Solr replication scripts require POSTing a
How are you indexing the data? By posting XML, or using DIH?
On Tue, Nov 18, 2008 at 3:53 PM, con [EMAIL PROTECTED] wrote:
Hi Guys
I have timestamp fields in my database in the format,
ddmmyyhhmmss.Z AM
eg: 26-05-08 10:45:53.66100 AM
But I think since the Solr date format is
Hi Glen ,
You can post all the queries first on solr-dev and all the valid ones
can be moved to JIRA
thanks,
Noble
On Wed, Nov 19, 2008 at 3:26 AM, Glen Newton [EMAIL PROTECTED] wrote:
Yes, I've found it.
Do you want my comments here or in solr-dev or on jira?
Glen
2008/11/18 Shalin
Thanks gistolero.
I have added this to the FAQ
http://wiki.apache.org/solr/DataImportHandlerFaq
On Wed, Nov 19, 2008 at 2:34 AM, [EMAIL PROTECTED] wrote:
Very cool :-)
Both suggestions work fine! But only with solr version 1.4:
https://issues.apache.org/jira/browse/SOLR-823
Use a nightly
at the DateFormatTransformer. You can find documentation on
the
DataImportHandler wiki.
http://wiki.apache.org/solr/DataImportHandler
On Tue, Nov 18, 2008 at 10:41 PM, con [EMAIL PROTECTED] wrote:
Hi Noble,
I am using DIH.
Noble Paul നോബിള് नोब्ळ् wrote:
How are you indexing the data
Hi John,
It is probably not the expected behavior.
Only 'explicit' fields should be case-sensitive.
Could you tell me the use case, or can you paste the data-config?
--Noble
On Thu, Nov 20, 2008 at 8:55 AM, Jon Baer [EMAIL PROTECTED] wrote:
Sorry I should have mentioned this is from using
Unfortunately native JS objects are not handled by the ScriptTransformer yet,
but what you can do in the script is create a new
java.util.ArrayList() and add each item into that.
Something like:
var jsarr = ['term1', 'term2', 'term3'];
var arr = new java.util.ArrayList();
for each (var item in jsarr) arr.add(item);
will introspect the fields @ load time?
- Jon
On Nov 19, 2008, at 11:11 PM, Noble Paul നോബിള് नोब्ळ् wrote:
Hi John,
It is probably not the expected behavior.
Only 'explicit' fields should be case-sensitive.
Could you tell me the use case, or can you paste the data-config?
--Noble
Set up an extra filter before SolrDispatchFilter to do authentication.
On Thu, Nov 20, 2008 at 12:28 PM, RaghavPrabhu [EMAIL PROTECTED] wrote:
Hi all,
I'm using multiple cores and all I need to do is to make each core
secure. If I am accessing a particular core via URL, it
On Sat, Nov 22, 2008 at 3:10 AM, Chris Hostetter
[EMAIL PROTECTED] wrote:
: it might be worth considering a new @attribute for fields to indicate
: that they are going to be used purely as component fields (ie: your
: first-name/last-name example) and then have DIH pass all non-component
:
On Mon, Nov 24, 2008 at 7:25 AM, Chris Hostetter
[EMAIL PROTECTED] wrote:
: Logging an error and returning successfully (without adding any docs) is
: still inconsistent with the way all other RequestHandlers work: fail the
: request.
:
: I know DIH isn't a typical RequestHandler, but
which version of DIH are you using?
On Tue, Nov 25, 2008 at 5:24 PM, Joel Karlsson [EMAIL PROTECTED] wrote:
Hello,
I get an Unknown field error when I'm indexing an Oracle DB. I've reduced the
number of fields/columns in order to troubleshoot. If I change the uniqueKey
to timestamp (for example)
Every row emitted by an outer entity results in a new SQL query on the
inner entity (yes, 50 queries on the inner entity). So, if you wish to
join multiple tables, then nested entities are the way to go.
CachedSqlEntityProcessor is meant to help you reduce the number of
queries fired on sub-entities.
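A sketch of the difference (tables and columns are illustrative). The first form fires one query per parent row; the second loads the sub-table once and does cached lookups:

```xml
<!-- one query per parent banner row -->
<entity name="size"
        query="select size from banner_size
               where banner_id='${banner.id}'"/>

<!-- single query; rows are cached and looked up per parent row -->
<entity name="size" processor="CachedSqlEntityProcessor"
        query="select banner_id, size from banner_size"
        where="banner_id=banner.id"/>
```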
dataset sizes that have been tested using this framework and
what are some performance metrics?
Thanks again
Amit
On Tue, Nov 25, 2008 at 7:32 AM, Noble Paul നോബിള് नोब्ळ्
[EMAIL PROTECTED] wrote:
Every row emitted by an outer entity results in a new SQL query on the
inner entity (yes
Anything that is passed as a request parameter can be put into the
SearchHandler's defaults or invariants section.
This is equivalent to passing the shard URL in the request.
However, you may need to set up a load balancer if a
shard has more than one host.
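A sketch in solrconfig.xml (host names are illustrative):

```xml
<requestHandler name="standard" class="solr.SearchHandler">
  <lst name="defaults">
    <!-- every request through this handler is distributed
         across these shards -->
    <str name="shards">solr1:8983/solr,solr2:8983/solr</str>
  </lst>
</requestHandler>
```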
On Wed, Nov 26, 2008
. I need to update name field =
new where name=old.
-Original Message-
From: Noble Paul നോബിള് नोब्ळ् [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 26, 2008 3:56 PM
To: solr-user@lucene.apache.org
Subject: Re: Deleting indices
yes
On Wed, Nov 26, 2008 at 3:19 PM
yes
On Wed, Nov 26, 2008 at 3:19 PM, Raghunandan Rao
[EMAIL PROTECTED] wrote:
Hi,
I need to update an index after I update data in the DB. I will first
update the DB and then call deleteByQuery in Solrj and then update
particular index. What happens to deleteByQuery method if there are
some questions...
Noble Paul നോബിള് नोब्ळ् wrote:
On Tue, Nov 25, 2008 at 11:35 PM, Amit Nithian [EMAIL PROTECTED] wrote:
2) In the example, there were two use cases, one that is like
query=select
* from Y where xid=${X.ID} and another where it's query=select * from
Y
where=xid=${x.ID
I suspect only one thing:
are the data types the same for productid and product.id
in the DB?
On Wed, Nov 26, 2008 at 5:38 PM, Steffen B. [EMAIL PROTECTED] wrote:
Hi Noble Paul,
thanks for your quick response.
Noble Paul നോബിള് नोब्ळ् wrote:
What i expect to happen is when you run the query
I am raising an issue for better error checking in CachedSqlEntityProcessor:
https://issues.apache.org/jira/browse/SOLR-884
On Wed, Nov 26, 2008 at 9:34 PM, Steffen B. [EMAIL PROTECTED] wrote:
Noble Paul നോബിള് नोब्ळ् wrote:
I suspect only one thing:
are the data types the same for productid
https://issues.apache.org/jira/secure/attachment/12394070/sslogo-solr-finder2.0.png
https://issues.apache.org/jira/secure/attachment/12394165/solr-logo.png
https://issues.apache.org/jira/secure/attachment/12394266/apache_solr_b_red.jpg
Look at the file
http://svn.apache.org/viewvc/lucene/solr/trunk/example/solr/conf/solrconfig.xml?revision=720502&view=markup
and take a look at the
line
<requestHandler name="standard" class="solr.SearchHandler" default="true">
You may see the defaults there.
You can add your param just the way the
The extension points are well documented: EntityProcessor,
DataSource and Transformer.
Adding field boost is a planned item.
It must work as follows:
add a special value $fieldBoost.fieldname to the row map,
and DocBuilder should respect that. You can raise a bug and we can
commit it soon.
On Sat, Nov 29, 2008 at 7:26 PM, Jon Baer [EMAIL PROTECTED] wrote:
HadoopEntityProcessor for the DIH?
Reading data from Hadoop with DIH could be really cool
There are a few very useful ones which are badly needed. The most useful
one would be a TikaEntityProcessor.
But I do not see it solving the
In the end Lucene stores everything as strings.
Even if you do store your data as a map FieldType, Solr may not be able
to treat it like a map.
So it is fine to put the map in as one single string.
On Mon, Dec 1, 2008 at 10:07 PM, Stephane Bailliez [EMAIL PROTECTED] wrote:
Hi all,
I'm looking for
Hi Joel,
DIH does not automatically translate a Clob to text.
We can open that as an issue.
Meanwhile you can write a transformer of your own to read the Clob and
convert it to text:
http://wiki.apache.org/solr/DataImportHandler#head-4756038c418ab3fa389efc822277a7a789d27688
On Tue, Dec 2, 2008 at 2:57
I have raised a new issue
https://issues.apache.org/jira/browse/SOLR-891
On Tue, Dec 2, 2008 at 9:54 AM, Noble Paul നോബിള് नोब्ळ्
[EMAIL PROTECTED] wrote:
Hi Joel,
DIH does not translate Clob automatically to text.
We can open that as an issue.
meanwhile you can write a transformer of your
On Tue, Dec 2, 2008 at 3:01 PM, Marc Sturlese [EMAIL PROTECTED] wrote:
Hey there,
I have my DataImportHandler almost completely configured. I am missing three
goals. I don't think I can reach them just via XML conf or a Transformer and
SqlEntityProcessor plugin, but I need to be sure of that.
If
(); }
row.put(columnName, strOut.toString());
}
}
}
return row;
}
}
// Joel
2008/12/2 Noble Paul നോബിള് नोब्ळ् [EMAIL PROTECTED]
Hi Joel,
DIH does not translate Clob automatically to text.
We can open
control
Am I in the correct direction?
Sorry for my English, and thanks in advance.
Noble Paul നോബിള് नोब्ळ् wrote:
On Tue, Dec 2, 2008 at 3:01 PM, Marc Sturlese [EMAIL PROTECTED]
wrote:
Hey there,
I have my DataImportHandler almost completely configured. I am missing
three
goals. I don't
Wojtek, you can report back the numbers if possible.
It would be nice to know how the new impl performs in the real world.
On Tue, Dec 2, 2008 at 11:45 PM, Yonik Seeley [EMAIL PROTECTED] wrote:
On Tue, Dec 2, 2008 at 1:10 PM, wojtekpia [EMAIL PROTECTED] wrote:
Is there a configurable way to switch
delta-import file?
On Wed, Dec 3, 2008 at 12:08 AM, Lance Norskog [EMAIL PROTECTED] wrote:
Does the DIH delta feature rewrite the delta-import file for each set of
rows? If it does not, that sounds like a bug/enhancement.
Lance
-Original Message-
From: Noble Paul നോബിള് नोब्ळ्
will start indexing from the
last doc that was indexed in the previous indexation. But I am still a bit
confused about how to do that...
Noble Paul നോബിള് नोब्ळ् wrote:
delta-import file?
On Wed, Dec 3, 2008 at 12:08 AM, Lance Norskog [EMAIL PROTECTED] wrote:
Does the DIH delta feature
] wrote:
That's what I am trying to do. Thanks for the advice. Once I have it done I
will raise the issue and upload the patch.
Noble Paul നോബിള് नोब्ळ् wrote:
OK. I guess I see it. I am thinking of exposing the writes to the
properties file via an API.
say Context#persist(key,value
Did you look at the DataImportHandler
http://wiki.apache.org/solr/DataImportHandler
On Wed, Dec 3, 2008 at 4:29 PM, Neha Bhardwaj
[EMAIL PROTECTED] wrote:
I have just started using Solr and with the help of the documentation
available I can't figure out whether there is any way with which I can
you have to restart the server
You may also need to re-index the data if the changes are incompatible
On Thu, Dec 4, 2008 at 3:09 PM, Neha Bhardwaj
[EMAIL PROTECTED] wrote:
Hi,
Every time I make any change in the schema, I have to restart the server. Is
this because I have made some mistake, or
Paul നോബിള് नोब्ळ् [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 04, 2008 3:12 PM
To: solr-user@lucene.apache.org
Subject: Re: changing schema is dynamic or not
you have to restart the server
You may also need to re-index the data if the changes are incompatible
On Thu, Dec 4, 2008 at 3