Hi Shawn,
I am new to Solr and I have set up a cloud cluster of 1 shard and 3
collections on 2 servers. I am facing the same issue. I am using
CloudSolrClient client = new
CloudSolrClient.Builder(zkUrls,Optional.empty()).build(), to create my
client.
and then I fire import command using,
clien
Thanks Shawn,
I'm already using the admin UI and get URL for fetching the status of
dataimporter from the network console and tried it outside the admin UI. The admin
UI has the same behavior: when I press Execute, the status messages
are swapped between "not started", "
We're facing an issue related to the dataimporter status on new Admin UI
(7.0.1).
Calling to the API
http://solrip/solr/collection/dataimport?_=1512314812090&command=status&indent=on&wt=json
returns a different status even though the importer is running
The messages are swapped betw
or could i use a filter in schema.xml where i define a fieldtype and use some
filter that understands xpath?
No that wouldn't work. It seems that you probably need a custom
Transformer to extract the right div content. I do not know if
TikaEntityProcessor supports such a thing.
so could i just nest it in an XPathEntityProcessor to filter the html, or is
there something like xpath for tika?
but now i don't know how to pass the text to tika, what do i put in url and
datasou
I don't know much about Tika but in the example data-config.xml that
you posted, the "xpath" attribute on the field "text" won't work
because the xpath attribute is used only by an XPathEntityProcessor.
I want tika to only index the content in ... for the
field "text". unfortunately it's indexing the whole page. Can't xpath do this?
data-config.xml:
http://127.0.0.1/tkb/internet/docImportUrl.xml"; forEach="/docs/doc"
dataSource="main">
i changed the following line (xpath):
ok but i'm not doing any path extraction, at least i don't think so.
htmlMapper="identity" isn't preserving html
it's reading the content of the pages but it's not putting it into "text_test"
and "text". it's only in "text_test"; the copyField isn't working.
data-config.xml:
Ah. That's because Tika processor does not support path extraction. You
need to nest one more level.
Regards,
Alex
i can do it like this but then the content isn't copied to text. it's just in
text_test
i put it in the tika-entity as an attribute, but it doesn't change anything. my
bigger concern is why text_test isn't populated at all
Can you try SOLR-4530 switch:
https://issues.apache.org/jira/browse/SOLR-4530
Specifically, setting htmlMapper="identity" on the entity definition. This
will tell Tika to send full HTML rather than a seriously stripped one.
Regards,
Alex.
Personal website: http://www.outerthoughts.com/
LinkedIn:
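For reference, the SOLR-4530 switch sits directly on the Tika entity in data-config.xml. A minimal sketch (the entity name, dataSource name, and field mapping are illustrative, not taken from this thread; the url pattern follows the ${rec.path}${rec.file} form quoted later):

```xml
<entity name="tika" processor="TikaEntityProcessor"
        url="${rec.path}${rec.file}" dataSource="bin"
        format="html" htmlMapper="identity">
  <!-- htmlMapper="identity" asks Tika to emit the full HTML
       instead of the default, heavily stripped mapping -->
  <field column="text" name="text"/>
</entity>
```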
i'm trying to index an html page and only use the div with the id="content".
unfortunately nothing is working within the tika-entity, only the standard text
(content) is populated.
do i have to use copyField for test_text to get the data?
or is there a problem with the entity-h
i'm using solr 4.3 which i just downloaded today and am using only jars that
came with it. i have enabled the dataimporter and it runs without error. but
the field "path" (included in schema.xml) and "text" (file content) aren't
indexed. what am i doing wrong?
solr-path: C:\ColdFusion10\cfusion\jetty-new
collection-path: C:\ColdFusio
Subject: Re: solr autodetectparser tikaconfig dataimporter error
hi
is there no one with an idea what this error is, or who can even give me a pointer
where to look? If not, is there an alternative way to import documents from an
xml-file with meta-data and the filename to parse?
thanks for any help.
On 12. Jul 2013, a
at
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:622)
at
org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:268)
at
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:187)
at
org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:359)
Hi,
I want to index emails using solr. I put the user name, password, hostname
in data-config.xml under mail folder. This is a valid email but when I run
in the url http://localhost:8983/solr/mail/dataimport?command=full-import it
said cannot access mail/dataimporter, reason: not found. But when i
Kun,
it should be enough to use the same field second time, like this:
value2
Regards
Stefan
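Stefan's inline example was stripped by the archive; it presumably looked something like the following data-config fragment (entity and column names here are illustrative guesses, not from the original mail):

```xml
<entity name="item" query="select value1, value2 from atable">
  <!-- mapping two columns onto the same Solr field produces
       a multi-valued "name" field -->
  <field column="value1" name="name"/>
  <field column="value2" name="name"/>
</entity>
```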
Since the interface of DataImporter returns a Map, I can't put multiple
values for the same field, right?
Example:
I write a class extending DataImporter, and want to index {"value1",
"value2"} for field "name".
How should I do this?
Many thanks.
Kun
The basic Solr document ingestion process does not currently support this.
In the DataImportHandler you can configure it to skip any document
that fails a processor. You would have to write your own processor
that hunts for that word and throws an exception.
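Besides a custom processor, this kind of row-dropping is often sketched with DIH's ScriptTransformer and the special $skipDoc flag. A hypothetical data-config fragment (the field names and the blacklisted term are illustrative, not from the thread):

```xml
<dataConfig>
  <script><![CDATA[
    function dropBlacklisted(row) {
      var body = row.get('body');
      // skip the whole document when a blacklisted keyword appears
      if (body != null && body.indexOf('badword') != -1) {
        row.put('$skipDoc', 'true');
      }
      return row;
    }
  ]]></script>
  <document>
    <entity name="doc" query="select id, body from docs"
            transformer="script:dropBlacklisted">
      <field column="id" name="id"/>
      <field column="body" name="text"/>
    </entity>
  </document>
</dataConfig>
```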
On Sat, Feb 26, 2011 at 2:12 PM, Rosa
Hi,
Is there a way to drop a document at indexing time based on a blacklist
keyword list?
Something like the stopwords.txt...
But in this case when one keyword is detected in a specific field at
indexing, the whole doc would be skipped.
Regards
Subject: communication between entity processor and solr DataImporter
Hi,
I'd like to communicate errors between my entity processor and the DataImporter
in case of error.
Should there be an error in my entity processor, I'd like the index build to
roll back. How can I do this?
I want to throw an exception of some sort. Only thing I
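As a hedge, not confirmed by any reply in this thread: DIH entities expose an onError attribute, and aborting the import (which triggers a rollback) is one plausible route when the processor throws. A sketch, with a hypothetical processor name:

```xml
<!-- onError="abort" stops the whole import when the entity
     processor throws; "skip" and "continue" are the other options -->
<entity name="docs" processor="MyEntityProcessor" onError="abort">
  <field column="id" name="id"/>
</entity>
```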
Looking at DataImporter I'm not sure if it's possible to import using a
standard ... xml document representing a document add operation.
Generating is quite expensive in my application and I have cached
all those documents in a text column in a MySQL database.
It will be easier
Is it possible that you have the same IDs in both entities?
Could you show here your entity mappings?
Bilgin Ibryam
On Wed, Jul 14, 2010 at 11:48 AM, Amdebirhan, Samson, VF-Group <
samson.amdebir...@vodafone.com> wrote:
Hi all,
Can someone help me with this?
When importing 2 different entities one by one (specifying them through the entity
parameter), why does the second import delete the index previously created
for the first entity, and vice-versa?
The documentation provided by the solr website reports that :
"enti
> I dont know what pattern the user will configure the
> columns in a separate
> table.i have to read this table to map the solr-fields to
> these columns ,so
> i cant give dynamic fields also,and Transformers also seems
> to be no use in
> this case.
>
You don't need to know column names. You
What's the best way to get to the instance of the DataImport handler from the
current context?
Thanks
--
View this message in context:
http://lucene.472066.n3.nabble.com/DataImporter-from-context-tp825517p825517.html
Sent from the Solr - User mailing list archive at Nabble.com.
> HI,
> I want to map my solr-fields using the Customized
> DataImport Handler
>
> For ex:
>
> I have a fields called
> />
>
>
> Actually my column-names comes dynamically from another
> table it varies from
> client to client.
> instead of giving the Mapped-Db-columns as 'NAME' i
> w
igure this
dynamically using the Customized Import Handler.
can i use my own DataImportHandler to implement this?
Please help me.
Thanks in advance
Is DIH keeping some static state that prevents it from running across two
cores separately? if so, that'd be a bug.
Erik
Hello again ;)
i install tomcat5.5 on my debian server ...
i use 2 cores and two different Index, one for the
normal search-feature and the other core for the suggest-feature.
but i cannot start both DIH with an import command at the same time. how
is this possible?
Hello again ;)
is it possible to import with two cores from one table in the database at the
same time?
thx
Here is the complete error message:
"
type Status report

message Severe errors in solr configuration. Check your log files for more
detailed information on what may be wrong. If you want solr to continue
after configuration errors, change:
false in null
-
org.apache.solr.common.SolrException: FATAL: Could not create importer.
DataImporter config invalid at
org.apache.solr.handler.dataimport.DataImportHandler.inform(DataImportHandler.java:114)
at
org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:31
Is this the complete stacktrace ? or is there anything else that is missing?
there is something wrong w/ your data-config.xml you can paste it here
On Thu, Jul 9, 2009 at 3:14 PM, gateway0 wrote:
Hi,
I wanted to port my windows installation of solr to mac os. But the
following error occurred:
"Could not create importer. DataImporter config invalid at
org.apache.solr.handler.dataimport.DataImportHandler.inform(DataImportHandler.java:114)
at
org.apache.solr.core.SolrResourceLoader.i
This is fixed in trunk. The next nightly build should have this fix.
--
Regards,
Shalin
Aah, Bryan you got it ... Thanks!
Noble: so i can hope that it'll be fixed soon :) thank you for fixing it ...
please lemme know when its done..
Thanks!
Mani Kumar
Hi Bryan,
Thanks a lot. It is invoking the wrong method
it should have been
bsz = context.getVariableResolver().replaceTokens(bsz);
it was a silly mistake
--Noble
I think there is a bug in the 1.4 daily builds of data import handler
which is causing the batchSize parameter to be ignored. This was
probably introduced with more recent patches to resolve variables.
The affected code is in JdbcDataSource.java
String bsz = initProps.getProperty("batch
DIH streams 1 row at a time.
DIH is just a component in Solr. Solr indexing also takes a lot of memory
Yes its throwing the same OOM error and from same place...
yes i will try increasing the size ... just curious : how this dataimport
works?
Does it load the whole table into memory?
Is there any estimate of how much memory it needs to create an index for 1GB
of data.
thx
mani
On Tue, Apr 14, 2
Hi Shalin:
yes i tried with the batchSize="-1" parameter as well
here is the config i tried with:
driver="com.mysql.jdbc.Driver"
url="jdbc:mysql://localhost/mydb_development"
user="root" password="**"
I hope i have used the batchSize parameter in the right place.
Thanks!
Mani Kumar
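For reference, batchSize belongs on the JdbcDataSource element. A sketch assembled from the details quoted in this thread (the password is masked as in the original):

```xml
<dataSource type="JdbcDataSource"
            driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost/mydb_development"
            user="root" password="**"
            batchSize="-1"/>
<!-- batchSize="-1" makes the MySQL driver stream rows
     instead of buffering the whole result set in memory -->
```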
On Tue, Apr 14, 2009 at 11:24 AM, Shalin Shekhar Mangar <
s
Mani, the data-
Hi Noble:
But the question is how much memory? are there any rules or something like
that, so that i can estimate how much memory it requires?
Yeah i can increase it up to 800MB max; will try it and let you know
Thanks!
Mani
2009/4/14 Noble Paul നോബിള് नोब्ळ्
Here is the stack trace:
notice in the stack trace: "at
com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:1749)"
It looks like it's trying to read the whole table into memory at a time, and
that's why it's getting OOM.
Apr 14, 2009 11:15:01 AM org.apache.solr.handler.dataimport.DataImporter
doFullImpo
DIH itself may not be consuming so much memory. It also includes the
memory used by Solr.
Do you have a hard limit of 400MB? Is it not possible to increase it?
Hi ILAN:
Only one query is required to generate a document ...
Here is my data-config.xml
and other useful info:
mysql> select count(*) from items;
+----------+
| count(*) |
+----------+
|   900051 |
+----------+
1 row in set (0.00 sec)
Each
Depending on your dataset and how your queries look you may very likely
need to increase to a larger heap size. How many queries and rows are
required for each of your documents to be generated?
Ilan
On 4/13/09 12:21 PM, Mani Kumar wrote:
Hi Shalin:
Thanks for quick response!
By default it was set to 1.93 MB.
But i also tried it with following command:
$ ./apache-tomcat-6.0.18/bin/startup.sh -Xmn50M -Xms300M -Xmx400M
I also tried tricks given on
http://wiki.apache.org/solr/DataImportHandlerFaq page.
what should i try next ?
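One thing worth noting (an assumption about the setup, not stated in the thread): arguments appended to startup.sh are not passed to the JVM, so the -Xmx flags in the command above were likely ignored. Heap options go in JAVA_OPTS or CATALINA_OPTS instead:

```shell
# startup.sh ignores trailing JVM flags; export them instead
export CATALINA_OPTS="-Xms300M -Xmx400M"
# then start Tomcat normally:
# ./apache-tomcat-6.0.18/bin/startup.sh
```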
On Mon, Apr 13, 2009 at 11:57 PM, Mani Kumar wrote:
> Hi All,
> I am trying to setup a Solr instance on my macbook.
>
> I get following errors when m trying to do a full db import ... please help
> me on this
>
> java.lang.OutOfMemoryError: Java heap space
>at
>
> org.apache.solr.handler.d
I am using Tomcat ...
Hi All,
I am trying to setup a Solr instance on my macbook.
I get the following errors when I'm trying to do a full db import ... please
help me on this
Apr 13, 2009 11:53:28 PM org.apache.solr.handler.dataimport.JdbcDataSource$1
call
INFO: Creating a connection for entity slideshow with URL:
jdbc:mysq
Currently there is nothing. There is a hackish way to achieve it:
DIH allows reading values from request params and using them in the templates.
eg: query="select * from atable where id > ${dataimporter.request.last_id}"
so, DIH must be invoked with the extra request param last_id like this
http://:/
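Concretely, the hack might look like this (the table, column, core, and parameter values below are illustrative examples, not reconstructions of the truncated URL above):

```xml
<!-- the request parameter is referenced through
     ${dataimporter.request.<name>} in the entity query -->
<entity name="atable"
        query="select * from atable where id > ${dataimporter.request.last_id}">
  <field column="id" name="id"/>
</entity>
```

and DIH would then be invoked with the extra parameter, e.g. http://localhost:8983/solr/core1/dataimport?command=full-import&last_id=1000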
Hi all,
I'm using the dataimport patch, and it is working wonderfully. I
had one question though. Is there a way to pick some other "last_index"
value that could be selected on a per entity basis? Right now it is
just last_index_time which is great, except for a table that doesn't
have modified
a field/column id. The fact that you omitted
it was an oversight, or not necessary, I suppose.
3) Your statement about uniqueKey needs some clarification:
- I do have the following in my schema.xml: id; I
also added comboId. Are both necessary?
Subject: Re: How to describe 2 entities in dataConfig for the DataImporter?
Hi Julio,
The following are my assumptions after studying your given data-config
examples
1. The column id is present in all three tables -- vets, owners and pets.
2. Vets and owners are independent of each other
> * CASE 3 (Commented out "vets" to simplify case. Nested entities don't
> work:
> "Document [null] missing required field: id")
>
>
>
> query="select id,first_name
The debug output for one row from the dataImporter while iterating over pets
where the first row owner_id=1 (which gets transformed to 'owners-1' -where
owner_id is a fk to owners id c
to either remove the 'owners-' suffix before
comparing, or append the same suffix to the pets.owner_id value prior to
comparing.
Thanks
** julio
hi Julio,
delete my previous response. In your schema, 'id' is the uniqueKey.
Make 'comboid' the unique key. Becau
select id,first_name,last_name FROM owners
0:0:0.15
--- row #1 ---
1
George
Franklin
0:0:0.0
Thanks again
** julio
Re: How to describe 2 entities in dataConfig for the DataImporter?
Hi Julio,
I've fixed the bug, can you please replace the existing
TemplateTransformer.java in the SOLR-469.patch and use the attached
TemplateTransformer.java file. We'll add the changes to our next patch.
Sorry for all the trouble.
The surname is used just as an example of a field.
The NullPointerException is because the same field "id" tr