Re: Escaping & character at Query
Hi,

I meant: solr/select?q="kelile&dimle"

Cheers.

2013/5/29 Jack Krupansky
> You need to URL-encode the & as %26:
>
> ...solr/select?q=kelile%26dimle
>
> Normally, & introduces a new query parameter in the URL.
>
> -- Jack Krupansky
>
> -----Original Message-----
> From: Furkan KAMACI
> Sent: Wednesday, May 29, 2013 10:55 AM
> To: solr-user@lucene.apache.org
> Subject: Escaping & character at Query
>
> I use Solr 4.2.1 and I analyzed this keyword at the admin page:
>
> kelile&dimle
>
> WT
>
> kelile&dimle
>
> SF
>
> kelile&dimle
>
> TLCF
>
> kelile&dimle
>
> However, when I escape that character and search for it:
>
> solr/select?q=kelile\&dimle
>
> here is what I see:
>
> 0
> 148
>
> kelile\
>
> I have edismax as the default query parser. How can I escape the "&"
> character, and why doesn't it work like this?:
>
> kelile\&dimle
>
> Any ideas?
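For anyone hitting this from client code, here is a minimal sketch of percent-encoding the query value before it goes into the URL. It is plain Java; the class name and the localhost core URL are only illustrative, not anything from the thread:

import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class EncodeQueryParam {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // The raw query term contains '&', which would otherwise start a new URL parameter.
        String rawQuery = "kelile&dimle";

        // Percent-encode the value; '&' becomes %26.
        String encoded = URLEncoder.encode(rawQuery, "UTF-8");

        // Hypothetical Solr URL, just to show where the encoded value goes.
        String url = "http://localhost:8983/solr/select?q=" + encoded;

        System.out.println(url); // http://localhost:8983/solr/select?q=kelile%26dimle
    }
}

A client library such as SolrJ performs this encoding for you when you set q on a SolrQuery, so no manual escaping of & should be needed at the HTTP level there.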
Re: Escaping & character at Query
Hi,

try with double quotation marks (" ").

Carlos.

2013/5/29 Furkan KAMACI
> I use Solr 4.2.1 and I analyzed this keyword at the admin page:
>
> kelile&dimle
>
> WT
>
> kelile&dimle
>
> SF
>
> kelile&dimle
>
> TLCF
>
> kelile&dimle
>
> However, when I escape that character and search for it:
>
> solr/select?q=kelile\&dimle
>
> here is what I see:
>
> 0
> 148
>
> kelile\
>
> I have edismax as the default query parser. How can I escape the "&"
> character, and why doesn't it work like this?:
>
> kelile\&dimle
>
> Any ideas?
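Just to make the combination explicit: if the request is sent as a raw GET, the quotes go around the phrase, but the & inside them still has to be percent-encoded, otherwise the servlet container splits the parameter at the &. Illustrative form only:

    solr/select?q=%22kelile%26dimle%22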
Re: Facet pivot 50.000.000 different values
In case anyone is interested, I solved my problem using the "grouping" feature:

query --> "filter" query (if any)
field --> field that you want to count (in my case field "B")

SolrQuery solrQuery = new SolrQuery(query);
solrQuery.add("group", "true");
solrQuery.add("group.field", "B");      // group by the field
solrQuery.add("group.ngroups", "true");
solrQuery.setRows(0);

And in the response, getNGroups() will give you the total number of distinct values (the total number of distinct "B" values).

Cheers,
Carlos.

2013/5/18 Carlos Bonilla
> Hi Mikhail,
> yes, the thing is that I need to take different queries into account, and
> that's why I can't use the Terms Component.
>
> Cheers.
>
> 2013/5/17 Mikhail Khludnev
>> On Fri, May 17, 2013 at 12:47 PM, Carlos Bonilla wrote:
>>
>> > We only need to calculate how many different "B" values have more than 1
>> > document but it takes ages
>>
>> Carlos,
>> It's not clear whether you need to take the results of a query into account or
>> just gather statistics from the index. If the latter, you can just enumerate terms
>> and look at TermsEnum.docFreq(). Am I getting it right?
>>
>> --
>> Sincerely yours
>> Mikhail Khludnev
>> Principal Engineer,
>> Grid Dynamics
>>
>> <http://www.griddynamics.com>
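To make the read side concrete, here is a minimal sketch of pulling ngroups out of the SolrJ 4.x response. The server URL and class name are placeholders, and the getGroupResponse()/getValues() navigation is how I understand the 4.x SolrJ API, so treat it as an illustration rather than a drop-in:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.GroupCommand;
import org.apache.solr.client.solrj.response.QueryResponse;

public class DistinctBValues {
    public static void main(String[] args) throws Exception {
        // Hypothetical core URL; adjust to your setup.
        HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");

        SolrQuery solrQuery = new SolrQuery("*:*");   // or your filter query
        solrQuery.add("group", "true");
        solrQuery.add("group.field", "B");            // group by field B
        solrQuery.add("group.ngroups", "true");       // ask for the distinct-group count
        solrQuery.setRows(0);                         // we only want the count, not the documents

        QueryResponse response = server.query(solrQuery);

        // One GroupCommand per group.field; ngroups is the number of distinct "B" values.
        GroupCommand command = response.getGroupResponse().getValues().get(0);
        System.out.println("distinct B values: " + command.getNGroups());
    }
}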
Re: solr starting time takes too long
Hi Lisheng,

I had the same problem when I enabled "autoSoftCommit" in solrconfig.xml. If you have it enabled, disabling it could fix your problem.

Cheers,
Carlos.

2013/5/22 Zhang, Lisheng
>
> Hi,
>
> We are using Solr 3.6.1, and our application has many cores (more than 1K).
> The problem is that Solr startup takes a long time (>10 minutes). Examining the
> log file and code, we found that for each core we load many resources, but
> in our app we are sure we always use the same solrconfig.xml and
> schema.xml for all cores. While we can configure schema.xml to be shared,
> we cannot share the SolrConfig object. But looking inside the SolrConfig code,
> we do not use any of the caches.
>
> Could we somehow change the config (or source code) to share resources between
> cores to reduce Solr starting time?
>
> Thanks very much for your help,
> Lisheng
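For reference, this is roughly what the relevant block looks like in a Solr 4.x solrconfig.xml; the 1000 ms interval is only an example value, and soft commits arrived with Solr 4, so this element would not be present in a stock 3.6.1 config:

<!-- soft auto-commit: maxTime of 1000 ms is just an example interval -->
<autoSoftCommit>
  <maxTime>1000</maxTime>
</autoSoftCommit>

Commenting the whole element out (or removing it) and restarting disables soft auto-commits.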
Re: Facet pivot 50.000.000 different values
Hi Mikhail,
yes, the thing is that I need to take different queries into account, and that's why I can't use the Terms Component.

Cheers.

2013/5/17 Mikhail Khludnev
> On Fri, May 17, 2013 at 12:47 PM, Carlos Bonilla wrote:
>
> > We only need to calculate how many different "B" values have more than 1
> > document but it takes ages
>
> Carlos,
> It's not clear whether you need to take the results of a query into account or
> just gather statistics from the index. If the latter, you can just enumerate terms
> and look at TermsEnum.docFreq(). Am I getting it right?
>
> --
> Sincerely yours
> Mikhail Khludnev
> Principal Engineer,
> Grid Dynamics
>
> <http://www.griddynamics.com>
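For the pure index-statistics case Mikhail describes (no query constraint), a rough sketch of the term enumeration against a Lucene 4.x index follows. The index path is a placeholder, the field name "B" comes from the thread, and this reads the Lucene index directly rather than going through Solr:

import java.io.File;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.MultiFields;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.store.FSDirectory;

public class CountRepeatedBValues {
    public static void main(String[] args) throws Exception {
        // Placeholder path; point it at the core's data/index directory.
        IndexReader reader = DirectoryReader.open(FSDirectory.open(new File("/path/to/index")));
        try {
            Terms terms = MultiFields.getTerms(reader, "B");   // all terms of field B
            long moreThanOneDoc = 0;
            if (terms != null) {
                TermsEnum te = terms.iterator(null);           // Lucene 4.x iterator signature
                while (te.next() != null) {
                    // docFreq counts documents containing the term (deleted docs included),
                    // which is usually fine for a rough distinct-value statistic.
                    if (te.docFreq() > 1) {
                        moreThanOneDoc++;
                    }
                }
            }
            System.out.println("B values with more than 1 document: " + moreThanOneDoc);
        } finally {
            reader.close();
        }
    }
}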
Re: Facet pivot 50.000.000 different values
Sorry, 16 GB RAM (not 8).

2013/5/17 Carlos Bonilla
> Hi,
> To calculate some stats we are using a field "B" with 50.000.000 different
> values as a facet pivot in a schema that contains 200.000.000 documents. We
> only need to calculate how many different "B" values have more than 1
> document, but it takes ages. Is there any other better way/configuration to
> do this?
>
> Configuration:
> Solr 4.2.1
> JVM Java 7
> Max Java heap size: 12 GB
> 8 GB RAM
> Dual Core
>
> Many thanks.
Facet pivot 50.000.000 different values
Hi,
To calculate some stats we are using a field "B" with 50.000.000 different values as a facet pivot in a schema that contains 200.000.000 documents. We only need to calculate how many different "B" values have more than 1 document, but it takes ages. Is there any other better way/configuration to do this?

Configuration:
Solr 4.2.1
JVM Java 7
Max Java heap size: 12 GB
8 GB RAM
Dual Core

Many thanks.
Re: Facet which takes sum of a field into account for result values
Hi,

have a look at http://wiki.apache.org/solr/TermsComponent.

Regards,
Carlos.

2013/5/8 ld
> Within MySQL it is possible to get the top N results while summing a
> particular column in the database. For example:
>
> SELECT ip_address, SUM(ip_count) AS count FROM table GROUP BY ip_address
> ORDER BY count DESC LIMIT 5
>
> This will return the top 5 ip_addresses based on the sum of ip_count.
>
> Is there a way to have a facet query within Solr do the same? In other
> words, count an entry as if there were 'ip_count' entries, not just one?
>
> I have used the Stats component and faceting, but this gives me all the
> records; there is no way to limit to the top 10 sums. My data set may have
> millions of records with a lot of variation in IP address, so this wouldn't
> work.
>
> I have also considered adding ip_count number of entries when writing to
> Solr, but this causes some issues with the unique ID shared with legacy code
> that still uses MySQL.
>
> Any help is appreciated.
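For what it's worth, a sketch of what a TermsComponent request could look like via SolrJ 4.x. The /terms handler name, the field name, and the server URL are assumptions about your setup, and note the caveat: the counts returned are document frequencies per ip_address, not the sum of ip_count, so this only fits if each document represents a single hit:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.client.solrj.response.TermsResponse;

public class TopIpAddresses {
    public static void main(String[] args) throws Exception {
        // Hypothetical core URL.
        HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");

        SolrQuery query = new SolrQuery();
        query.setRequestHandler("/terms");      // TermsComponent is commonly exposed here
        query.set("terms", "true");
        query.set("terms.fl", "ip_address");    // field to enumerate
        query.set("terms.limit", "5");          // top 5 terms
        query.set("terms.sort", "count");       // highest document count first

        QueryResponse response = server.query(query);
        TermsResponse terms = response.getTermsResponse();
        for (TermsResponse.Term t : terms.getTerms("ip_address")) {
            // getFrequency() is the number of documents containing the term,
            // not a sum over the ip_count field.
            System.out.println(t.getTerm() + " -> " + t.getFrequency());
        }
    }
}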