sat wrote:
> What do you mean by updating the viewer ?
Whatever you are using to view the file timestamps seems to adjust for your
local timezone (which is normally a fine thing). How you tell your viewer to
show UTC instead, I don't know. If you use a command line tool under Linux, I
would
Hi,
I am new to Solr; I am facing a problem and need help solving it.
I have a field that needs to be updated frequently.
Imagine I need to index all posts of the members of a social app:
in this case I need to store and index all post fields, like caption,
image, title, comments, etc.
But que
: Strange enough, the following code gives different errors:
:
: assertQ(
I'm not sure what exactly assertQ will do in a distributed test like this
... probably nothing good. you'll almost certainly want to stick with
the distributed indexDoc() and query* methods and avoid assertU and
as
Hello Irina,
I looked through the DIH sources; it seems you have gone beyond its design.
Such leakages are not possible in it. I can only suggest calling a private
method through reflection:
org.apache.solr.handler.dataimport.ContextImpl.getDocument().
Perhaps you can pass some state to Transformers via
C
Mark,
Thanks for your feedback. Making Solr easy to use is important for us.
On Fri, Sep 4, 2015 at 1:43 PM, Mark Fenbers wrote:
> Chris,
>
> The document "Uploading Structured Data Store Data with the Data Import
> Handler" has a number of references to solrconfig.xml, starting on Page 2
> and contin
Thanks a lot, Shawn, for the details; they are very helpful!
--
View this message in context:
http://lucene.472066.n3.nabble.com/any-easy-way-to-find-out-when-a-core-s-index-physical-file-has-been-last-updated-tp4227044p4227274.html
Sent from the Solr - User mailing list archive at Nabble.com.
On 9/4/2015 10:40 AM, Mark Fenbers wrote:
> Caused by: java.lang.ClassCastException: class
> org.apache.solr.handler.dataimport.DataImportHandler
> at java.lang.Class.asSubclass(Class.java:3208)
This is the root cause of your inability to create the dataimport
handler. This is a different me
Upayavira,
The docs are all unique. In my example the two docs are considered to be
dupes because the requested fields all have the same values.
Fields:  A      B   C   D    E
Doc 1:   apple, 10, 15, bye, yellow
Doc 2:   apple, 12, 15, by,  green
The two docs are certainly unique. Say they are on
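That notion of "dupes" (equality on only the requested fields) can be sketched in plain Java; the class and helper names below are illustrative, not anything from Solr:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class FieldDedupe {
    // Keep only the first doc for each combination of requested-field values.
    static List<Map<String, String>> dedupe(List<Map<String, String>> docs,
                                            List<String> requested) {
        Set<List<String>> seen = new HashSet<>();
        List<Map<String, String>> unique = new ArrayList<>();
        for (Map<String, String> doc : docs) {
            List<String> key = new ArrayList<>();
            for (String field : requested) {
                key.add(doc.get(field));
            }
            if (seen.add(key)) {  // add() is false when this key was already seen
                unique.add(doc);
            }
        }
        return unique;
    }

    public static void main(String[] args) {
        // Doc 1 and Doc 2 differ in B, D, E but match on A and C,
        // so requesting only A and C makes them collide as "dupes".
        List<Map<String, String>> docs = List.of(
                Map.of("A", "apple", "B", "10", "C", "15", "D", "bye", "E", "yellow"),
                Map.of("A", "apple", "B", "12", "C", "15", "D", "by", "E", "green"));
        System.out.println(dedupe(docs, List.of("A", "C")).size());
    }
}
```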
Are you using a single instance or cloud? What version of Solr are you using?
In your solrconfig.xml, is the path to where you copied your library specified
in a <lib> tag? Do you have a jar file for the Postgres JDBC driver in your
lib directory as well?
For simple setup in 5.x I copy the jars to
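For reference, the <lib> directive mentioned above typically looks like this in solrconfig.xml (the path here is an example, not a required location):

```xml
<!-- load all jars (e.g. DIH and the Postgres JDBC driver) from one directory -->
<lib dir="/opt/solr/lib" regex=".*\.jar" />
```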
On 9/4/2015 10:14 AM, Renee Sun wrote:
> I will start using autocommit, confident that it will greatly reduce the
> spurious commit requests (a lot of them) from processes in our system.
>
> Regarding the solr version, it is actually a big problem we have to resolve
> sooner or later.
>
> When we upgrade
On 9/4/15, 7:06 AM, "Yonik Seeley" wrote:
>
>Lucene seems to always be changing its execution model, so it can be
>difficult to keep up. What version of Solr are you using?
>Lucene also changed how filters work, so now, a filter is
>incorporated with the query like so:
>
>query = new BooleanQ
I believe Arcadius has a point, but I still think the answer is no.
ZooKeeper clients (Solr/SolrJ) connect to a single ZooKeeper server
instance at a time, and keep that session open to that same server as long
as they can/need. During this time, all interactions between the client and
the ZK ense
Greetings,
I'm moving on from the tutorials and trying to set up an index for my own
data (from a database). All I did was add the following to the
solrconfig.xml (taken verbatim from the example in Solr documentation,
except for the name="config" pathname) and I get an error in the
web-based
Shawn, thanks so much, and this user forum is so helpful!
I will start using autocommit, confident that it will greatly reduce the
spurious commit requests (a lot of them) from processes in our system.
Regarding the solr version, it is actually a big problem we have to resolve
sooner or later.
When we
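For reference, autocommit lives in the <updateHandler> section of solrconfig.xml and looks roughly like this (the intervals are illustrative, not recommendations):

```xml
<autoCommit>
  <!-- hard commit (flush to stable storage) at most every 15 seconds -->
  <maxTime>15000</maxTime>
  <!-- don't open a new searcher on hard commits -->
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <!-- soft commit (new documents become visible) at most every 60 seconds -->
  <maxTime>60000</maxTime>
</autoSoftCommit>
```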
Arcadius:
Note that one of the more recent changes is "per collection states" in
ZK. So rather
than have one huge clusterstate.json that gets passed out to all
collections on any
change, the listeners can now listen only to specific collections.
Reduces the "thundering herd" problem.
Best,
Eri
Mark:
Right, the problem with Google searches (as you well know) is that you
get random snippets from all over the place, ones that often assume
some background knowledge.
There are several good books around that tend to have things arranged
progressively that might be a good investment, here's a
How do we do a hashing of the content?
Regards,
Edwin
On 4 September 2015 at 17:37, Arcadius Ahouansou
wrote:
> You could try using a hash of the content?
> On Sep 4, 2015 9:00 AM, "Zheng Lin Edwin Yeo"
> wrote:
>
> > Hi,
> >
> > I'm trying out De-Duplication. I've tried to create a new
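To Edwin's question: Solr's SignatureUpdateProcessorFactory computes the hash for you (e.g. Lookup3Signature over the configured fields), so you normally don't hash by hand. If you did want a content hash client-side, a plain-Java sketch would look like the following (MD5 is used here purely as an illustration, not what Solr uses internally):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class ContentHash {
    // Hash a document's content into a stable hex string (illustrative only).
    static String md5Hex(String content) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        byte[] digest = md.digest(content.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Identical content yields identical signatures, so duplicates collide.
        System.out.println(md5Hex("hello"));
    }
}
```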
Thank you very much for your reply.
What do you mean by updating the viewer ?
--
View this message in context:
http://lucene.472066.n3.nabble.com/SOLR-last-modified-different-than-filesystem-last-modified-tp4226894p4227164.html
Sent from the Solr - User mailing list archive at Nabble.com.
>> This is part of a bigger issue we should work at doing better at for
>> Solr 6: debugability / supportability.
>> For a specific request, what took up the memory, what cache misses or
>> cache instantiations were there, how much request-specific memory was
>> allocated, how much shared memory wa
Yes please:
http://www.amazon.com/Solr-Troubleshooting-Maintenance-Alexandre-Rafalovitch/dp/1491920149/
:-)
Regards,
Alex.
Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
http://www.solr-start.com/
On 4 September 2015 at 10:30, Yonik Seeley wrote:
> On Fri, Sep 4,
Hi,
I have a requirement that, in a field, the user wants suggestions: e.g. if I
enter "nice", then "nicer" and "nicest" should come back.
But the problem is that I have 40 million records and the same number of Solr
fields.
How is it possible to make suggestions across all of those fields?
Thanks
--
View this message in context:
http
On Fri, Sep 4, 2015 at 10:18 AM, Alexandre Rafalovitch
wrote:
> Yonik,
>
> Is this all visible on query debug level?
Nope, unfortunately not.
This is part of a bigger issue we should work at doing better at for
Solr 6: debugability / supportability.
For a specific request, what took up the memor
Yonik,
Is this all visible on query debug level? Would it be effective to ask
to run both queries with debug enabled and to share the expanded query
value? Would that show the differences between Lucene
implementations you described?
(Looking for troubleshooting tips to reuse).
Regards,
Al
On Thu, Sep 3, 2015 at 4:45 PM, Jeff Wartes wrote:
>
> I have a query like:
>
> q=&fq=enabled:true
>
> For purposes of this conversation, "fq=enabled:true" is set for every query,
> I never open a new searcher, and this is the only fq I ever use, so the
> filter cache size is 1, and the hit rati
Noble,
Does SOLR-8000 need to be re-opened? Has anyone else been able to test the
restart fix?
At startup, these are the log messages that say there is no security
configuration and the plugins aren’t being used even though security.json is in
Zookeeper:
2015-09-04 08:06:21.205 INFO (main)
vatu...@yandex.ru wrote:
> Hello. I work with Solr 4.10.
> I use DIH and some custom java Transformers to synchronize my Solr index with
> the database (MySQL is used)
> Is there any way to change the fields in root entity from the sub entity?
I don't think that works, but if you are writing you
There are no download links for the 5.3.x branch until we do a bug-fix release.
If you wish to download the trunk nightly build (which is not the same as 5.3.0),
check here:
https://builds.apache.org/job/Solr-Artifacts-trunk/lastSuccessfulBuild/artifact/solr/package/
If you wish to get the binaries for 5.3 branc
It's possible that the ReducerStream's buffer can grow too large if
document groups are very large. But the ReducerStream only needs to hold
one group at a time in memory. The RollupStream, in trunk, has a grouping
implementation that doesn't hang on to all the Tuples from a group. You
could also i
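The idea of holding only one group at a time over a key-sorted stream (as described for the RollupStream above) can be sketched in plain Java; the names here are illustrative, not Solr's actual streaming API:

```java
import java.util.AbstractMap;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class GroupRollup {
    // Sum values per key over a stream already sorted by key.
    // Only the current group's accumulator is held in memory,
    // never all tuples of a group at once.
    static Map<String, Integer> rollup(List<Map.Entry<String, Integer>> sorted) {
        Map<String, Integer> result = new LinkedHashMap<>();
        String currentKey = null;
        int sum = 0;
        for (Map.Entry<String, Integer> tuple : sorted) {
            if (!tuple.getKey().equals(currentKey)) {
                if (currentKey != null) {
                    result.put(currentKey, sum); // emit the finished group
                }
                currentKey = tuple.getKey();
                sum = 0;
            }
            sum += tuple.getValue();
        }
        if (currentKey != null) {
            result.put(currentKey, sum); // emit the last group
        }
        return result;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> tuples = Arrays.asList(
                new AbstractMap.SimpleEntry<>("a", 1),
                new AbstractMap.SimpleEntry<>("a", 2),
                new AbstractMap.SimpleEntry<>("b", 5));
        System.out.println(rollup(tuples)); // {a=3, b=5}
    }
}
```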
Chris,
The document "Uploading Structured Data Store Data with the Data Import
Handler" has a number of references to solrconfig.xml, starting on Page
2 and continuing on page 3 in the section "Configuring solrconfig.xml".
It also is mentioned on Page 5 in the "Property Writer" and the "Data
Hello. I work with Solr 4.10.
I use DIH and some custom Java Transformers to synchronize my Solr index
with the database (MySQL).
Is there any way to change the fields in the root entity from a sub-entity?
I mean something like this
public class MySubEntityTransformer extends
org.apache.solr.
Hi Kevin/Noble,
What is the download link for the latest version? What are the steps to
compile, test, and use it?
We also have a use case for this feature in Solr, so we wanted to test it,
and the above info would help a lot in getting started.
Thanks.
On Fri, Sep 4, 2015 at 1:45 PM, Kevin Lee wr
Hi Shawn,
I tried this
SystemDefaultHttpClient cl = new SystemDefaultHttpClient();
HttpSolrClient solrSvr = new HttpSolrClient(url, cl);
And it worked. Thanks a lot for your help.
Regards,
Firas
-----Original Message-----
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Friday, September
Hi Guys,
thanks for the answers, you helped me a lot. I wrote a PHP script for this
problem.
Thank you
--
View this message in context:
http://lucene.472066.n3.nabble.com/Indexing-Fixed-length-file-tp4225807p4227163.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi Shawn,
Thanks for your response. I am not using HttpClient directly. I am using
HttpSolrClient, which uses it internally, so I have no control over it.
Below is a snippet of my test code:
HttpSolrClient solrSvr = new HttpSolrClient(url);
SolrQuery query=new SolrQuery();
query.setQuery("xyz");
Strangely enough, the following code gives different errors:
assertQ(
req("q", "*:*", "debug", "true", "indent", "true"),
"//result/doc[1]/str[@name='id'][.='1']",
"//result/doc[2]/str[@name='id'][.='2']",
"//result/doc[3]/str[@name='id'][.='3']",
Adding json.nl=map instead of wt=json returns results in XML format:
<int name="status">0</int>
<int name="QTime">2</int>
You could try using a hash of the content?
On Sep 4, 2015 9:00 AM, "Zheng Lin Edwin Yeo" wrote:
> Hi,
>
> I'm trying out De-Duplication. I've tried to create a new signature
> field in schema.xml:
> <field name="signature" type="string" stored="true" indexed="true" multiValued="false" />
>
> I've also added the following in solrconfig.xml.
>
>
>
> t
Hello - i am trying to create some tests using BaseDistributedSearchTestCase
but two errors usually appear. Consider the following test:
@Test
@ShardsFixed(num = 3)
public void test() throws Exception {
del("*:*");
index(id, "1", "lang", "en", "text", "this is some text");
inde
I don't have a code snippet - I just found it in the solrj source code.
As to using JSON, I'm not sure of the structure of the JSON you are
getting back, but you might try adding json.nl=map, which changes the
way named lists are returned and may make them easier to parse.
Upayavira
On Fri, Sep 4, 20
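For illustration, the difference between the default "flat" named-list rendering and json.nl=map looks roughly like this for a facet field (the field name and counts are invented):

```
# default (json.nl=flat): alternating name/value pairs in one array
"facet_fields": { "lang": ["en", 10, "fr", 3] }

# with json.nl=map: a plain JSON object, easier to parse in most clients
"facet_fields": { "lang": { "en": 10, "fr": 3 } }
```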
This is the code which i have written to get the stemmed word.
public class URLConnectionReader {
public static void main(String[] args) throws Exception {
URL solr = new URL(
"http://localhost:8983/solr/
"+args[0]+"/analysis/field?wt=json&analysis.showmatch=true&analys
Thanks, I downloaded the source and compiled it and replaced the jar file in
the dist and solr-webapp’s WEB-INF/lib directory. It does seem to be
protecting the Collections API reload command now as long as I upload the
security.json after startup of the Solr instances. If I shutdown and bring
Hi,
I'm trying out De-Duplication. I've tried to create a new signature
field in schema.xml.
I've also added the following in solrconfig.xml.
<bool name="enabled">true</bool>
<str name="signatureField">signature</str>
<bool name="overwriteDupes">false</bool>
<str name="fields">content</str>
<str name="signatureClass">solr.processor.Lookup3Signature</str>
However, I can't do a copyField of content into this signature field as
Hello Shawn.
This question was raised because IMHO, apart from leader election, there
are other load-generating activities such as all 10 solrj
clients+solrCloudNodes listening to changes on clusterstate.json/state.json
and downloading the whole file in case there is a change... And this would
have
Yes, look at the one I mentioned further up in this thread, which is a
part of SolrJ: FieldAnalysisRequest
That uses the same HTTP call in the backend, but formats the result in a
Java-friendly manner.
Upayavira
On Fri, Sep 4, 2015, at 05:52 AM, Ritesh Sinha wrote:
> Yeah, I got. Thanks.
>
> It