Re: Re: Re: Facet ranges and stats

2017-06-01 Thread Per Newgro
Thank you for your offer. But I think I need to rethink the concept altogether.
I need to configure the limit in the database and use it in all appropriate
places.
I already have a clue how to do it - but lack the time :-)

> Sent: Thursday, 01 June 2017 at 15:00
> From: "Susheel Kumar" <susheel2...@gmail.com>
> To: solr-user@lucene.apache.org
> Subject: Re: Re: Facet ranges and stats
>
> Great, it worked out.  If you want to share where and in what code you have
> the 90 configured, we can brainstorm whether we can simplify it to have only
> one place.
> 
> On Thu, Jun 1, 2017 at 3:16 AM, Per Newgro <per.new...@gmx.ch> wrote:
> 
> > Thanks for your support.
> >
> > Because the null handling is one of the important things i decided to use
> > another way.
> >
> > I added a script in my data import handler that decides if object was
> > audited
> >   function auditComplete(row) {
> > var total = row.get('TOTAL');
> > if (total == null || total < 90) {
> >   row.remove('audit_complete');
> > } else {
> >   row.put('audit_complete', 1);
> > }
> > return row;
> >   }
> >
> > When i add/update a document the same will be done in my code.
> > So i can do my query based on audit_complete field, because i only need to
> > know how many are complete and how many not.
> >
> > A drawback is surely that the "complete" limit of 90 is now implemented in
> > two places (DIH script and my code).
> > But so far i can life with it.
> >
> > Thank you
> > Per
> >
> > > Sent: Wednesday, 31 May 2017 at 17:28
> > > From: "Susheel Kumar" <susheel2...@gmail.com>
> > > To: solr-user@lucene.apache.org
> > > Subject: Re: Facet ranges and stats
> > >
> > > Hi,
> > >
> > > You may want to explore JSON facets.  The closest I can get to the above
> > > requirement is the query below (replace inStock with your rank field and
> > > price with your total field). Null handling is something you will also
> > > have to look at.
> > >
> > > --
> > > Susheel
> > >
> > > curl http://localhost:8983/solr/techproducts/query -d 'q=*:*&
> > >
> > >   json.facet={inStocks:{ terms:{
> > >     field: inStock,
> > >     limit: 5,
> > >     facet:{
> > >       priceRange:{ range:{   // nested range facet, executed for the top 5 inStock buckets of the parent
> > >         field: price,
> > >         start: 0,
> > >         end: 90,
> > >         gap: 90,
> > >         other: "after"
> > >       }}
> > >     }
> > >   }}}'
> > >
> > >
> > > On Wed, May 31, 2017 at 7:33 AM, Per Newgro <per.new...@gmx.ch> wrote:
> > >
> > > > Hello,
> > > >
> > > > i would like to generate some stats on my facets. This is working so
> > far.
> > > > My problem is that i don't know how to generate Ranges on my facets and
> > > > calculate the stats for it.
> > > >
> > > > I have two fields in my schema -> rank(string) and total(float,
> > nullable)
> > > > Rank can be A or B or C. In case my object was audited document
> > contains a
> > > > total value (78 or 45 or ...). Otherwise the value is null.
> > > >
> > > > What i need to calculate per Rank is the count of documents having a
> > total
> > > > value >= 90 and the count of the other documents (null or < 90).
> > > >
> > > > My solution would be to implement 2 queries. But what i learned so far:
> > > > Solr is build to avoid that.
> > > >
> > > > Can you please give me hint how i could solve this problem.
> > > >
> > > > Thanks for your support
> > > > Per
> > > >
> > >
> >
> 


Re: Re: Facet ranges and stats

2017-06-01 Thread Per Newgro
Thanks for your support.

Because null handling is one of the important things, I decided to take
another approach.

I added a script to my data import handler that decides whether an object was audited:

  function auditComplete(row) {
    var total = row.get('TOTAL');
    if (total == null || total < 90) {
      row.remove('audit_complete');
    } else {
      row.put('audit_complete', 1);
    }
    return row;
  }

When I add/update a document, the same is done in my code.
So I can do my query based on the audit_complete field, because I only need to
know how many are complete and how many are not.

A drawback is surely that the "complete" limit of 90 is now implemented in two
places (DIH script and my code).
But so far I can live with it.
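With the audit_complete field in place, both counts can come from a single request. A hedged sketch using two facet.query clauses (field name from the script above; the second, negated query counts documents where audit_complete is absent, i.e. the incomplete ones - exact syntax depends on your Solr version):

```text
q=*:*&rows=0&facet=true
  &facet.query={!key=complete}audit_complete:1
  &facet.query={!key=incomplete}(*:* -audit_complete:[* TO *])
```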

Thank you
Per

> Sent: Wednesday, 31 May 2017 at 17:28
> From: "Susheel Kumar" <susheel2...@gmail.com>
> To: solr-user@lucene.apache.org
> Subject: Re: Facet ranges and stats
>
> Hi,
> 
> You may want to explore JSON facets.  The closest I can get to the above
> requirement is the query below (replace inStock with your rank field and
> price with your total field). Null handling is something you will also have
> to look at.
> 
> -- 
> Susheel
> 
> curl http://localhost:8983/solr/techproducts/query -d 'q=*:*&
> 
>   json.facet={inStocks:{ terms:{
>     field: inStock,
>     limit: 5,
>     facet:{
>       priceRange:{ range:{   // nested range facet, executed for the top 5 inStock buckets of the parent
>         field: price,
>         start: 0,
>         end: 90,
>         gap: 90,
>         other: "after"
>       }}
>     }
>   }}}'
> 
> 
> On Wed, May 31, 2017 at 7:33 AM, Per Newgro <per.new...@gmx.ch> wrote:
> 
> > Hello,
> >
> > i would like to generate some stats on my facets. This is working so far.
> > My problem is that i don't know how to generate Ranges on my facets and
> > calculate the stats for it.
> >
> > I have two fields in my schema -> rank(string) and total(float, nullable)
> > Rank can be A or B or C. In case my object was audited document contains a
> > total value (78 or 45 or ...). Otherwise the value is null.
> >
> > What i need to calculate per Rank is the count of documents having a total
> > value >= 90 and the count of the other documents (null or < 90).
> >
> > My solution would be to implement 2 queries. But what i learned so far:
> > Solr is build to avoid that.
> >
> > Can you please give me hint how i could solve this problem.
> >
> > Thanks for your support
> > Per
> >
> 


Facet ranges and stats

2017-05-31 Thread Per Newgro
Hello,

I would like to generate some stats on my facets. This is working so far. My
problem is that I don't know how to generate ranges on my facets and calculate
the stats for them.

I have two fields in my schema -> rank (string) and total (float, nullable).
Rank can be A, B or C. If my object was audited, the document contains a
total value (78, 45, ...). Otherwise the value is null.

What I need to calculate per rank is the count of documents having a total
value >= 90 and the count of the other documents (null or < 90).

My solution would be to implement 2 queries. But what I have learned so far:
Solr is built to avoid that.

Can you please give me a hint how I could solve this problem.

Thanks for your support
Per
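One way to get both counts per rank in a single request is the JSON Facet API (which the reply above also points at). A hedged sketch, assuming the rank and total fields from the question; documents with a null total simply don't match the range query, so the "incomplete" count is the bucket count minus "complete":

```json
{
  "query": "*:*",
  "limit": 0,
  "facet": {
    "ranks": {
      "type": "terms",
      "field": "rank",
      "facet": {
        "complete": { "type": "query", "q": "total:[90 TO *]" }
      }
    }
  }
}
```

Posted as the body of a request to the collection's /query endpoint, this would yield one bucket per rank (A, B, C), each with its document count and a nested count of documents whose total is >= 90.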


Re: Re: Solr 5.5.0 MSSQL Datasource Example

2017-02-08 Thread Per Newgro
Thank you Fuad,

with the dbcp2 BasicDataSource it is working.

First I needed to add these libraries to server/lib/ext:
commons-dbcp2-2.1.1.jar
commons-logging-1.2.jar
commons-pool2-2.4.2.jar
The current versions can be found at http://mvnrepository.com/search?q=dbcp

Then my DataSource definition looks like this:


java:comp/env/jdbc/myds


com.microsoft.sqlserver.jdbc.SQLServerDriver
jdbc:sqlserver://ip;databaseName=my_db
user
password
25
5000
SELECT 1
-1
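The XML markup around the values above was lost in the archive. A sketch of how they typically fit into a jetty-plus Resource backed by a dbcp2 BasicDataSource; the mapping of 25, 5000, SELECT 1 and -1 to maxTotal, maxWaitMillis, validationQuery and timeBetweenEvictionRunsMillis is an assumption:

```xml
<!-- sketch; property names for the pool values are assumed -->
<New id="myds" class="org.eclipse.jetty.plus.jndi.Resource">
  <Arg>java:comp/env/jdbc/myds</Arg>
  <Arg>
    <New class="org.apache.commons.dbcp2.BasicDataSource">
      <Set name="driverClassName">com.microsoft.sqlserver.jdbc.SQLServerDriver</Set>
      <Set name="url">jdbc:sqlserver://ip;databaseName=my_db</Set>
      <Set name="username">user</Set>
      <Set name="password">password</Set>
      <Set name="maxTotal">25</Set>
      <Set name="maxWaitMillis">5000</Set>
      <Set name="validationQuery">SELECT 1</Set>
      <Set name="timeBetweenEvictionRunsMillis">-1</Set>
    </New>
  </Arg>
</New>
```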




Thanks for your support
Per

> Sent: Tuesday, 07 February 2017 at 21:39
> From: "Fuad Efendi" <f...@efendi.ca>
> To: "Per Newgro" <per.new...@gmx.ch>, solr-user@lucene.apache.org
> Subject: Re: Solr 5.5.0 MSSQL Datasource Example
>
> Perhaps this answers your question:
> 
> 
> http://stackoverflow.com/questions/27418875/microsoft-sqlserver-driver-datasource-have-password-empty
> 
> 
> Try a different one, as per the Eclipse docs:
> 
> http://www.eclipse.org/jetty/documentation/9.4.x/jndi-datasource-examples.html
> 
> 
> 
> jdbc/DSTest
> user
> pass
> dbname
> localhost
> 1433
> 
> --
> 
> Fuad Efendi
> 
> (416) 993-2060
> 
> http://www.tokenizer.ca
> Search Relevancy, Recommender Systems
> 
> 
> From: Per Newgro <per.new...@gmx.ch> <per.new...@gmx.ch>
> Reply: solr-user@lucene.apache.org <solr-user@lucene.apache.org>
> <solr-user@lucene.apache.org>
> Date: February 7, 2017 at 10:15:42 AM
> To: solr-user-group <solr-user@lucene.apache.org>
> <solr-user@lucene.apache.org>
> Subject:  Solr 5.5.0 MSSQL Datasource Example
> 
> Hello,
> 
> has someone a working example for MSSQL Datasource with 'Standard Microsoft
> SQL Driver'.
> 
> My environment:
> debian
> Java 8
> Solr 5.5.0 Standard (download and installed as service)
> 
> server/lib/ext
> sqljdbc4-4.0.jar
> 
> Global JNDI resource defined
> server/etc/jetty.xml
> 
> 
> java:comp/env/jdbc/mydb
> 
> 
> ip
> mydb
> user
> password
> 
> 
> 
> 
> or 2nd option tried
> 
> 
> java:comp/env/jdbc/mydb
> 
> 
> jdbc:sqlserver://ip;databaseName=mydb;
> user
> password
> 
> 
> 
> 
> 
> collection1/conf/db-data-config.xml
> 
> 
> ...
> 
> This leads to SqlServerException login failed for user.
> at
> com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:216)
> 
> at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onEOF(tdsparser.java:254)
> at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:84)
> at
> com.microsoft.sqlserver.jdbc.SQLServerConnection.sendLogon(SQLServerConnection.java:2908)
> 
> at
> com.microsoft.sqlserver.jdbc.SQLServerConnection.logon(SQLServerConnection.java:2234)
> 
> at
> com.microsoft.sqlserver.jdbc.SQLServerConnection.access$000(SQLServerConnection.java:41)
> 
> at
> com.microsoft.sqlserver.jdbc.SQLServerConnection$LogonCommand.doExecute(SQLServerConnection.java:2220)
> 
> at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:5696)
> at
> com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1715)
> 
> at
> com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:1326)
> 
> at
> com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:991)
> 
> at
> com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:827)
> 
> at
> com.microsoft.sqlserver.jdbc.SQLServerDataSource.getConnectionInternal(SQLServerDataSource.java:621)
> 
> at
> com.microsoft.sqlserver.jdbc.SQLServerDataSource.getConnection(SQLServerDataSource.java:57)
> 
> at
> org.apache.solr.handler.dataimport.JdbcDataSource$1.getFromJndi(JdbcDataSource.java:256)
> 
> at
> org.apache.solr.handler.dataimport.JdbcDataSource$1.call(JdbcDataSource.java:182)
> 
> at
> org.apache.solr.handler.dataimport.JdbcDataSource$1.call(JdbcDataSource.java:172)
> 
> at
> org.apache.solr.handler.dataimport.JdbcDataSource.getConnection(JdbcDataSource.java:463)
> 
> at
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.(JdbcDataSource.java:309)
> 
> ... 12 more
> 
> But when I remove the JNDI datasource and rewrite the dataimport data
> source to
> 
> <dataSource driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
>     url="jdbc:sqlserver://ip;databaseName=mydb"
>     user="user" password="password" />
> ...
> 
> Then it works.
> But this way i need to configure the db in every core. I would like to
> avoid that.
> 
> Thanks
> Per
> 


Solr 5.5.0 MSSQL Datasource Example

2017-02-07 Thread Per Newgro
Hello,

does someone have a working example of an MSSQL DataSource with the 'Standard
Microsoft SQL Driver'?

My environment:
debian
Java 8
Solr 5.5.0 Standard (download and installed as service)

server/lib/ext
sqljdbc4-4.0.jar

Global JNDI resource defined
server/etc/jetty.xml


java:comp/env/jdbc/mydb


ip
mydb
user
password




Or the 2nd option I tried:


java:comp/env/jdbc/mydb


jdbc:sqlserver://ip;databaseName=mydb;
user
password
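The surrounding XML of these two options was stripped. A sketch of how the second option (the driver's own DataSource class) is typically written as a jetty-plus resource; the class name com.microsoft.sqlserver.jdbc.SQLServerDataSource is an assumption based on the driver jar in use:

```xml
<!-- sketch; only the name and connection values are from the original -->
<New id="mydb" class="org.eclipse.jetty.plus.jndi.Resource">
  <Arg>java:comp/env/jdbc/mydb</Arg>
  <Arg>
    <New class="com.microsoft.sqlserver.jdbc.SQLServerDataSource">
      <Set name="URL">jdbc:sqlserver://ip;databaseName=mydb;</Set>
      <Set name="user">user</Set>
      <Set name="password">password</Set>
    </New>
  </Arg>
</New>
```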





collection1/conf/db-data-config.xml

  
  ...

This leads to SQLServerException: login failed for user.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:216)
at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onEOF(tdsparser.java:254)
at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:84)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.sendLogon(SQLServerConnection.java:2908)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.logon(SQLServerConnection.java:2234)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.access$000(SQLServerConnection.java:41)
at com.microsoft.sqlserver.jdbc.SQLServerConnection$LogonCommand.doExecute(SQLServerConnection.java:2220)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:5696)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1715)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:1326)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:991)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:827)
at com.microsoft.sqlserver.jdbc.SQLServerDataSource.getConnectionInternal(SQLServerDataSource.java:621)
at com.microsoft.sqlserver.jdbc.SQLServerDataSource.getConnection(SQLServerDataSource.java:57)
at org.apache.solr.handler.dataimport.JdbcDataSource$1.getFromJndi(JdbcDataSource.java:256)
at org.apache.solr.handler.dataimport.JdbcDataSource$1.call(JdbcDataSource.java:182)
at org.apache.solr.handler.dataimport.JdbcDataSource$1.call(JdbcDataSource.java:172)
at org.apache.solr.handler.dataimport.JdbcDataSource.getConnection(JdbcDataSource.java:463)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.<init>(JdbcDataSource.java:309)
... 12 more

But when I remove the JNDI datasource and rewrite the dataimport data source to

<dataSource driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
    url="jdbc:sqlserver://ip;databaseName=mydb"
    user="user" password="password" />
...

Then it works.
But this way I need to configure the DB in every core. I would like to avoid
that.

Thanks
Per


Re: Re: Solr 5.5.0 Configure global jndi DS for dataimport

2017-02-07 Thread Per Newgro
Maybe someone is interested in the solution:

The resource name registered in jetty.xml AND the jndiName used in
db-data-config.xml both need the complete lookup name: java:comp/env/jdbc/myds

Per
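In other words, a sketch of the two places that must both carry the full name (resource class and dataSource type as used elsewhere in this thread; the resource body is elided):

```xml
<!-- server/etc/jetty.xml -->
<New id="myds" class="org.eclipse.jetty.plus.jndi.Resource">
  <Arg>java:comp/env/jdbc/myds</Arg>
  <!-- ... datasource definition ... -->
</New>

<!-- conf/db-data-config.xml -->
<dataSource type="JdbcDataSource" jndiName="java:comp/env/jdbc/myds" />
```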


> Sent: Tuesday, 07 February 2017 at 10:29
> From: "Per Newgro" <per.new...@gmx.ch>
> To: solr-user@lucene.apache.org
> Subject: Re: Re: Solr 5.5.0 Configure global jndi DS for dataimport
>
> Changed db-data-config.xml
> 
> 
> This leads to
> Caused by: javax.naming.NameNotFoundException; remaining name 'env/jdbc/myds'
> at org.eclipse.jetty.jndi.NamingContext.lookup(NamingContext.java:538)
> at org.eclipse.jetty.jndi.NamingContext.lookup(NamingContext.java:569)
> at org.eclipse.jetty.jndi.NamingContext.lookup(NamingContext.java:584)
> at 
> org.eclipse.jetty.jndi.java.javaRootURLContext.lookup(javaRootURLContext.java:108)
> at javax.naming.InitialContext.lookup(InitialContext.java:417)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$1.getFromJndi(JdbcDataSource.java:250)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$1.call(JdbcDataSource.java:182)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$1.call(JdbcDataSource.java:172)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource.getConnection(JdbcDataSource.java:463)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.(JdbcDataSource.java:309)
> ... 39 more
> 
> Do I need to install other libraries? Do I need to enable JNDI?
> Can I configure something useful for logging?
> 
> Thanks for your support
> Per
> 
> > Sent: Tuesday, 07 February 2017 at 10:02
> > From: alias <524839...@qq.com>
> > To: solr-user <solr-user@lucene.apache.org>
> > Subject: Re: Solr 5.5.0 Configure global jndi DS for dataimport
> >
> > jndiName="java:comp/env/jdbc/myds"
> > 
> > 
> > -- Original Message --
> > From: "Per Newgro";<per.new...@gmx.ch>;
> > Sent: Tuesday, 7 February 2017, 4:47 PM
> > To: "solr-user-group"<solr-user@lucene.apache.org>;
> > 
> > Subject: Solr 5.5.0 Configure global jndi DS for dataimport
> > 
> > 
> > 
> > Hello,
> > 
> > I would like to configure a JNDI datasource for use in dataimport. From the 
> > documentation it shall be possible and easy.
> > 
> > My environment:
> > Debian
> > OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-2~bpo8+1-b14)
> > Solr 5.5.0 downloaded and installed as service in /opt/solr
> > Installed core in /var/lib/solr/data/collection1
> > 
> > Solr is running and core can be managed.
> > 
> > Put into /opt/solr/server/lib
> > jetty-jndi-9.2.13.v20150730.jar
> > jetty-plus-9.2.13.v20150730.jar
> > Put into /opt/solr/server/lib/ext
> > sqljdbc4-4.0.jar
> > 
> > /opt/solr/server/etc/jetty.xml
> > ...
> > 
> > 
> > jdbc/myds
> > 
> > 
> >  > name="URL">jdbc:sqlserver://;databaseName=dbname;
> > user
> > password
> > 
> > 
> > 
> > ...
> > 
> > /var/lib/solr/data/collection1/conf/db-data-config.xml
> > 
> > 
> > 
> >  > name="bodyshop"
> > query="SELECT b.id as ID,
> >   customer_number as CUSTOMER_NUMBER,
> >   customer_name as CUSTOMER_NAME
> > FROM  schema.body_shops b
> >WHERE  '${dataimporter.request.clean}' != 'false'
> >   OR  b.last_modified > 
> > '${dataimporter.last_index_time}'">
> > ...
> > 
> > But all i get is an exception
> > Caused by: javax.naming.NameNotFoundException; remaining name 'jdbc/myds'
> > at 
> > org.eclipse.jetty.jndi.local.localContextRoot.lookup(localContextRoot.java:487)
> > at 
> > org.eclipse.jetty.jndi.local.localContextRoot.lookup(localContextRoot.java:533)
> > at javax.naming.InitialContext.lookup(InitialContext.java:417)
> > at 
> > org.apache.solr.handler.dataimport.JdbcDataSource$1.getFromJndi(JdbcDataSource.java:250)
> > at 
> > org.apache.solr.handler.dataimport.JdbcDataSource$1.call(JdbcDataSource.java:182)
> > at 
> > org.apache.solr.handler.dataimport.JdbcDataSource$1.call(JdbcDataSource.java:172)
> > at 
> > org.apache.solr.handler.dataimport.JdbcDataSource.getConnection(JdbcDataSource.java:463)
> > at 
> > org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.(JdbcDataSource.java:309)
> > ... 39 more
> > 
> > I've searched across the web for a solution but all i found did not work.
> > It would be great if someone could help me out.
> > 
> > Thanks
> > Per
>


Re: Re: Solr 5.5.0 Configure global jndi DS for dataimport

2017-02-07 Thread Per Newgro
Changed db-data-config.xml


This leads to
Caused by: javax.naming.NameNotFoundException; remaining name 'env/jdbc/myds'
at org.eclipse.jetty.jndi.NamingContext.lookup(NamingContext.java:538)
at org.eclipse.jetty.jndi.NamingContext.lookup(NamingContext.java:569)
at org.eclipse.jetty.jndi.NamingContext.lookup(NamingContext.java:584)
at org.eclipse.jetty.jndi.java.javaRootURLContext.lookup(javaRootURLContext.java:108)
at javax.naming.InitialContext.lookup(InitialContext.java:417)
at org.apache.solr.handler.dataimport.JdbcDataSource$1.getFromJndi(JdbcDataSource.java:250)
at org.apache.solr.handler.dataimport.JdbcDataSource$1.call(JdbcDataSource.java:182)
at org.apache.solr.handler.dataimport.JdbcDataSource$1.call(JdbcDataSource.java:172)
at org.apache.solr.handler.dataimport.JdbcDataSource.getConnection(JdbcDataSource.java:463)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.<init>(JdbcDataSource.java:309)
... 39 more

Do I need to install other libraries? Do I need to enable JNDI?
Can I configure something useful for logging?

Thanks for your support
Per

> Sent: Tuesday, 07 February 2017 at 10:02
> From: alias <524839...@qq.com>
> To: solr-user <solr-user@lucene.apache.org>
> Subject: Re: Solr 5.5.0 Configure global jndi DS for dataimport
>
> jndiName="java:comp/env/jdbc/myds"
> 
> 
> ---------- Original Message --
> From: "Per Newgro";<per.new...@gmx.ch>;
> Sent: Tuesday, 7 February 2017, 4:47 PM
> To: "solr-user-group"<solr-user@lucene.apache.org>;
> 
> Subject: Solr 5.5.0 Configure global jndi DS for dataimport
> 
> 
> 
> Hello,
> 
> I would like to configure a JNDI datasource for use in dataimport. From the 
> documentation it shall be possible and easy.
> 
> My environment:
> Debian
> OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-2~bpo8+1-b14)
> Solr 5.5.0 downloaded and installed as service in /opt/solr
> Installed core in /var/lib/solr/data/collection1
> 
> Solr is running and core can be managed.
> 
> Put into /opt/solr/server/lib
> jetty-jndi-9.2.13.v20150730.jar
> jetty-plus-9.2.13.v20150730.jar
> Put into /opt/solr/server/lib/ext
> sqljdbc4-4.0.jar
> 
> /opt/solr/server/etc/jetty.xml
> ...
> 
> 
> jdbc/myds
> 
> 
>  name="URL">jdbc:sqlserver://;databaseName=dbname;
> user
> password
> 
> 
> 
> ...
> 
> /var/lib/solr/data/collection1/conf/db-data-config.xml
> 
> 
> 
>  name="bodyshop"
> query="SELECT b.id as ID,
>   customer_number as CUSTOMER_NUMBER,
>   customer_name as CUSTOMER_NAME
> FROM  schema.body_shops b
>WHERE  '${dataimporter.request.clean}' != 'false'
>   OR  b.last_modified > 
> '${dataimporter.last_index_time}'">
> ...
> 
> But all i get is an exception
> Caused by: javax.naming.NameNotFoundException; remaining name 'jdbc/myds'
> at 
> org.eclipse.jetty.jndi.local.localContextRoot.lookup(localContextRoot.java:487)
> at 
> org.eclipse.jetty.jndi.local.localContextRoot.lookup(localContextRoot.java:533)
> at javax.naming.InitialContext.lookup(InitialContext.java:417)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$1.getFromJndi(JdbcDataSource.java:250)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$1.call(JdbcDataSource.java:182)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$1.call(JdbcDataSource.java:172)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource.getConnection(JdbcDataSource.java:463)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.(JdbcDataSource.java:309)
> ... 39 more
> 
> I've searched across the web for a solution but all i found did not work.
> It would be great if someone could help me out.
> 
> Thanks
> Per


Solr 5.5.0 Configure global jndi DS for dataimport

2017-02-07 Thread Per Newgro
Hello,

I would like to configure a JNDI datasource for use in dataimport. According
to the documentation it should be possible and easy.

My environment:
Debian
OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-2~bpo8+1-b14)
Solr 5.5.0 downloaded and installed as service in /opt/solr
Installed core in /var/lib/solr/data/collection1

Solr is running and core can be managed.

Put into /opt/solr/server/lib
jetty-jndi-9.2.13.v20150730.jar
jetty-plus-9.2.13.v20150730.jar
Put into /opt/solr/server/lib/ext
sqljdbc4-4.0.jar

/opt/solr/server/etc/jetty.xml
...


jdbc/myds


jdbc:sqlserver://;databaseName=dbname;
user
password



...

/var/lib/solr/data/collection1/conf/db-data-config.xml




...
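The markup of the two snippets above was stripped by the archive. A sketch of what they presumably contained, assuming the usual jetty-plus Resource form and the driver's SQLServerDataSource class; only the resource name and connection values are from the original:

```xml
<!-- server/etc/jetty.xml -->
<New id="myds" class="org.eclipse.jetty.plus.jndi.Resource">
  <Arg>jdbc/myds</Arg>
  <Arg>
    <New class="com.microsoft.sqlserver.jdbc.SQLServerDataSource">
      <Set name="URL">jdbc:sqlserver://;databaseName=dbname;</Set>
      <Set name="user">user</Set>
      <Set name="password">password</Set>
    </New>
  </Arg>
</New>

<!-- collection1/conf/db-data-config.xml -->
<dataSource type="JdbcDataSource" jndiName="jdbc/myds" />
```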

But all I get is an exception:
Caused by: javax.naming.NameNotFoundException; remaining name 'jdbc/myds'
at org.eclipse.jetty.jndi.local.localContextRoot.lookup(localContextRoot.java:487)
at org.eclipse.jetty.jndi.local.localContextRoot.lookup(localContextRoot.java:533)
at javax.naming.InitialContext.lookup(InitialContext.java:417)
at org.apache.solr.handler.dataimport.JdbcDataSource$1.getFromJndi(JdbcDataSource.java:250)
at org.apache.solr.handler.dataimport.JdbcDataSource$1.call(JdbcDataSource.java:182)
at org.apache.solr.handler.dataimport.JdbcDataSource$1.call(JdbcDataSource.java:172)
at org.apache.solr.handler.dataimport.JdbcDataSource.getConnection(JdbcDataSource.java:463)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.<init>(JdbcDataSource.java:309)
... 39 more

I've searched the web for a solution, but nothing I found worked.
It would be great if someone could help me out.

Thanks
Per


Caching multiple entities

2016-12-17 Thread Per Newgro

Hello,

we are implementing a questionnaire tool for companies. I would like to
import the data using a DIH.

To increase performance I would like to use some caching. But my solution is
not working: the score of my questionnaire is empty, although there is a
value in the database - I've checked that.

We can mark questionnaires for special purposes. I need to import the special
mpc score. The mpc questionnaire does not change while importing, so I
thought I could cache this value for use in the mpc_score queries.

Can you please help me find out what I'm doing wrong here?

Thanks

Per




processor="SqlEntityProcessor" 
cacheImpl="SortedMapBackedCache"

query="select qp.questionnaire AS ID
   from questionnaire_purposes qp
   join purposes p ON qp.id = p.id
   where p.name = 'mpc';">
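Pieced together, the cached entity above would look something like the following; the entity name "mpc" is hypothetical, while processor, cacheImpl and the query are from the original:

```xml
<entity name="mpc"
        processor="SqlEntityProcessor"
        cacheImpl="SortedMapBackedCache"
        query="select qp.questionnaire AS ID
               from questionnaire_purposes qp
               join purposes p ON qp.id = p.id
               where p.name = 'mpc';">
</entity>
```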










Re: Use a constant entity in all imported documents of DIH

2014-12-13 Thread Per Newgro

Thanks Alexandre, it seems to be the way to go.
I now do:

<document>
  <entity name="bodyshop" query="select id, customer_number from body_shops where state = 1">
    <field name="id" column="ID" />
    <field name="customer_number" column="CUSTOMER_NUMBER" />
    <!-- load a constant entity -->
    <entity name="constant" rootEntity="false" processor="CachedSqlEntityProcessor"
            query="select attr from constants where state = 1">
      <!-- use data of constant entity here -->
      <entity name="facility"
              query="select name from facilities where body_shop_id=${bodyshop.ID} and constant_attr = ${constant.attr}">
        <field name="name" column="name" />
      </entity>
    </entity>
  </entity>
</document>

I use the CachedSqlEntityProcessor to avoid repeated loads of my constant
entity.

I hope that is a valid way. I will come back if I'm facing problems with
this approach.

Thanks for your support
Per

On 12.12.2014 at 22:40, Alexandre Rafalovitch wrote:

Have a look at the documentation for the rootEntity attribute.
https://wiki.apache.org/solr/DataImportHandler

If you set it on the outer entity, I think it should give you what you
want with the nested entity structure. Then the outside entity will
load from the constant table and the inside from body_shops.

Regards,
Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853


On 12 December 2014 at 14:52, Per Newgro per.new...@gmx.ch wrote:

Do you mean with inner entity something like
 document
 entity name=bodyshop query=select id , customer_number
body_shops where state = 1
 field name=id column=ID /
 field name=customer_number column=CUSTOMER_NUMBER /
 entity name=facility query=select name from facilities where
body_shop_id=${bodyshop.ID}
 field name=name column=name /
 /entity
/entity
 /document

Yes that i could use. But i would use always the same entity in the where
clause of the sub-entity.
I would like to do something like

 !-- load a constant entity
 entity name = constant query=select attr from constants where state
= 1
 /entity
 document
 entity name=bodyshop query=select id , customer_number from
body_shops where state = 1
 field name=id column=ID /
 field name=customer_number column=CUSTOMER_NUMBER /
 !-- use data of constant entity here --
 entity name=facility query=select name from facilities where
body_shop_id=${bodyshop.ID} and constant_attr = ${constant.attr}
 field name=name column=name /
 /entity
/entity
 /document

I would like to avoid that repeating join that is required with the above
inner entity method above.

Thanks for your support
Per


On 12.12.2014 at 18:22, Alexandre Rafalovitch wrote:


Sounds like a case for nested entity definitions with the inner entity
being the one that's actually indexed? Just need to remember that all
the parent mapping is also applicable to all children.

Have you tried that?

Regards,
 Alex.


Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853


On 12 December 2014 at 12:15, Per Newgro per.new...@gmx.ch wrote:

Hello,

i would like to load an entity before document import in DIH starts.
I want to use the entity id for a sub-select in the document entity.

Can i achieve something like that?

Thanks for helping me
Per






Use a constant entity in all imported documents of DIH

2014-12-12 Thread Per Newgro

Hello,

I would like to load an entity before the document import in DIH starts.
I want to use the entity id for a sub-select in the document entity.

Can I achieve something like that?

Thanks for helping me
Per


Re: Use a constant entity in all imported documents of DIH

2014-12-12 Thread Per Newgro

Do you mean with inner entity something like

<document>
  <entity name="bodyshop" query="select id, customer_number from body_shops where state = 1">
    <field name="id" column="ID" />
    <field name="customer_number" column="CUSTOMER_NUMBER" />
    <entity name="facility" query="select name from facilities where body_shop_id=${bodyshop.ID}">
      <field name="name" column="name" />
    </entity>
  </entity>
</document>

Yes, that I could use. But I would always use the same entity in the where
clause of the sub-entity.

I would like to do something like

<!-- load a constant entity -->
<entity name="constant" query="select attr from constants where state = 1">
</entity>
<document>
  <entity name="bodyshop" query="select id, customer_number from body_shops where state = 1">
    <field name="id" column="ID" />
    <field name="customer_number" column="CUSTOMER_NUMBER" />
    <!-- use data of constant entity here -->
    <entity name="facility"
            query="select name from facilities where body_shop_id=${bodyshop.ID} and constant_attr = ${constant.attr}">
      <field name="name" column="name" />
    </entity>
  </entity>
</document>

I would like to avoid the repeated join that is required with the
inner-entity method above.

Thanks for your support
Per


On 12.12.2014 at 18:22, Alexandre Rafalovitch wrote:

Sounds like a case for nested entity definitions with the inner entity
being the one that's actually indexed? Just need to remember that all
the parent mapping is also applicable to all children.

Have you tried that?

Regards,
Alex.


Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853


On 12 December 2014 at 12:15, Per Newgro per.new...@gmx.ch wrote:

Hello,

i would like to load an entity before document import in DIH starts.
I want to use the entity id for a sub-select in the document entity.

Can i achieve something like that?

Thanks for helping me
Per




Re: Solr is not responding on deployment in tomcat

2013-07-16 Thread Per Newgro

Thanks Erick,

I've configured both to use 8080 (for Wicket this is standard :-)).

Do I have to assign a different port to Solr if I use both webapps in
the same container?

Btw. the context path for my Wicket app is /*.
Could that be a problem too?

Per

On 15.07.2013 at 17:12, Erick Erickson wrote:

Sounds like Wicket and Solr are using the same port(s)...

If you start Wicket first then look at the Solr logs, you might
see some message about port already in use or some such.

If this is SolrCloud, there are also the ZooKeeper ports to
wonder about.

Best
Erick

On Mon, Jul 15, 2013 at 6:49 AM, Per Newgro per.new...@gmx.ch wrote:

Hi,

maybe someone here can help me with my solr-4.3.1 issue.

I've successfully deployed the solr.war on a tomcat7 instance.
Starting the tomcat with only the solr.war deployed works nicely.
I can see the admin interface and the logs are clean.

If I
deploy my wicket-spring-data-solr based app (using the HttpSolrServer)
after the solr app
without restarting the tomcat
=> all is fine too.

I've implemented a ping to see if the server is up.

code
    private void waitUntilSolrIsAvailable(int i) {
        if (i == 0) {
            logger.info("Check solr state...");
        }
        if (i > 5) {
            throw new RuntimeException("Solr is not available after more than 25 secs. Going down now.");
        }
        if (i > 0) {
            try {
                logger.info("Wait for solr to get alive.");
                Thread.sleep(5000); // wait(5000) outside a synchronized block would throw IllegalMonitorStateException
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        }
        try {
            i++;
            SolrPingResponse r = solrServer.ping();
            if (r.getStatus() > 0) {
                waitUntilSolrIsAvailable(i);
            }
            logger.info("Solr is alive.");
        } catch (SolrServerException | IOException e) {
            throw new RuntimeException(e);
        }
    }
/code
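The recursive helper above can also be written as a plain bounded loop. A sketch (the class and method names are mine; the ping is abstracted to an IntSupplier returning the status code, with 0 meaning healthy, so the retry logic is testable without a running Solr):

```java
import java.util.function.IntSupplier;

final class SolrReadiness {

    // Polls pingStatus (0 = healthy) up to maxAttempts times, sleeping
    // waitMillis between failed attempts. Returns true once a ping succeeds.
    static boolean waitUntilAvailable(IntSupplier pingStatus, int maxAttempts, long waitMillis) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (pingStatus.getAsInt() == 0) {
                return true;
            }
            try {
                Thread.sleep(waitMillis); // sleep(), not wait(): no monitor involved
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }
}
```

In the original setup the supplier would be something like `() -> solrServer.ping().getStatus()`, wrapped to convert the checked exceptions.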

Here i can see log
log
54295 [localhost-startStop-2] INFO  org.apache.wicket.Application  – 
[wicket.project] init: Wicket extensions initializer
INFO  - 2013-07-15 12:07:45.261; 
de.company.service.SolrServerInitializationService; Check solr state...
54505 [localhost-startStop-2] INFO  
de.company.service.SolrServerInitializationService  – Check solr state...
INFO  - 2013-07-15 12:07:45.768; org.apache.solr.core.SolrCore; [collection1] 
webapp=/solr path=/admin/ping params={wt=javabin&version=2} hits=0 status=0 
QTime=20
55012 [http-bio-8080-exec-1] INFO  org.apache.solr.core.SolrCore  – [collection1] 
webapp=/solr path=/admin/ping params={wt=javabin&version=2} hits=0 status=0 
QTime=20
INFO  - 2013-07-15 12:07:45.770; org.apache.solr.core.SolrCore; [collection1] 
webapp=/solr path=/admin/ping params={wt=javabin&version=2} status=0 QTime=22
55014 [http-bio-8080-exec-1] INFO  org.apache.solr.core.SolrCore  – [collection1] 
webapp=/solr path=/admin/ping params={wt=javabin&version=2} status=0 QTime=22
INFO  - 2013-07-15 12:07:45.854; 
de.company.service.SolrServerInitializationService; Solr is alive.
55098 [localhost-startStop-2] INFO  
de.company.service.SolrServerInitializationService  – Solr is alive.
</log>

But if i
restart the tomcat
with both webapps (solr and wicket)
solr is not responding to the ping request.

<log>
INFO  - 2013-07-15 12:02:27.634; org.apache.wicket.Application; 
[wicket.project] init: Wicket extensions initializer
11932 [localhost-startStop-1] INFO  org.apache.wicket.Application  – 
[wicket.project] init: Wicket extensions initializer
INFO  - 2013-07-15 12:02:27.787; 
de.company.service.SolrServerInitializationService; Check solr state...
12085 [localhost-startStop-1] INFO  
de.company.service.SolrServerInitializationService  – Check solr state...
</log>

What could that be, or how can i get info on where this is stopping?

Thanks for your support
Per




Solr is not responding on deployment in tomcat

2013-07-15 Thread Per Newgro
Hi,

maybe someone here can help me with my solr-4.3.1 issue.

I've successfully deployed the solr.war on a tomcat7 instance.
Starting the tomcat with only the solr.war deployed - works nicely.
I can see the admin interface and logs are clean.

If i
deploy my wicket-spring-data-solr based app (using the HttpSolrServer)
after the solr app
without restarting the tomcat
=> all is fine too.

I've implemented a ping to see if the server is up.

<code>
    private void waitUntilSolrIsAvailable(int i) {
        if (i == 0) {
            logger.info("Check solr state...");
        }
        if (i > 5) {
            throw new RuntimeException("Solr is not available after "
                + "more than 25 secs. Going down now.");
        }
        if (i > 0) {
            try {
                logger.info("Wait for solr to get alive.");
                Thread.currentThread().wait(5000);
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        }
        try {
            i++;
            SolrPingResponse r = solrServer.ping();
            if (r.getStatus() > 0) {
                waitUntilSolrIsAvailable(i);
            }
            logger.info("Solr is alive.");
        } catch (SolrServerException | IOException e) {
            throw new RuntimeException(e);
        }
    }
</code>

Here i can see the log
<log>
54295 [localhost-startStop-2] INFO  org.apache.wicket.Application  – 
[wicket.project] init: Wicket extensions initializer
INFO  - 2013-07-15 12:07:45.261; 
de.company.service.SolrServerInitializationService; Check solr state...
54505 [localhost-startStop-2] INFO  
de.company.service.SolrServerInitializationService  – Check solr state...
INFO  - 2013-07-15 12:07:45.768; org.apache.solr.core.SolrCore; [collection1] 
webapp=/solr path=/admin/ping params={wt=javabin&version=2} hits=0 status=0 
QTime=20
55012 [http-bio-8080-exec-1] INFO  org.apache.solr.core.SolrCore  – 
[collection1] webapp=/solr path=/admin/ping params={wt=javabin&version=2} 
hits=0 status=0 QTime=20
INFO  - 2013-07-15 12:07:45.770; org.apache.solr.core.SolrCore; [collection1] 
webapp=/solr path=/admin/ping params={wt=javabin&version=2} status=0 QTime=22
55014 [http-bio-8080-exec-1] INFO  org.apache.solr.core.SolrCore  – 
[collection1] webapp=/solr path=/admin/ping params={wt=javabin&version=2} 
status=0 QTime=22
INFO  - 2013-07-15 12:07:45.854; 
de.company.service.SolrServerInitializationService; Solr is alive.
55098 [localhost-startStop-2] INFO  
de.company.service.SolrServerInitializationService  – Solr is alive.
</log>

But if i
restart the tomcat
with both webapps (solr and wicket)
solr is not responding to the ping request.

<log>
INFO  - 2013-07-15 12:02:27.634; org.apache.wicket.Application; 
[wicket.project] init: Wicket extensions initializer
11932 [localhost-startStop-1] INFO  org.apache.wicket.Application  – 
[wicket.project] init: Wicket extensions initializer
INFO  - 2013-07-15 12:02:27.787; 
de.company.service.SolrServerInitializationService; Check solr state...
12085 [localhost-startStop-1] INFO  
de.company.service.SolrServerInitializationService  – Check solr state...
</log>

What could that be, or how can i get info on where this is stopping?

Thanks for your support
Per


Re: Unable to getting started with SOLR

2011-11-10 Thread Per Newgro

Did you start the server (

*java -jar start.jar*

)? Was it successful? Have you checked the logs?

Am 10.11.2011 17:54, schrieb dsy99:

Hi all,
  Sorry for the inconvenience caused to anyone, but I need a reply for the
following.

I want to work in Solr, and for the same I downloaded it and started to
follow the instructions provided in the tutorial available at
http://lucene.apache.org/solr/tutorial.html to execute some examples
first.
But when I tried to check whether Solr is running or not by using
http://localhost:8983/solr/admin/ in the web browser, I found the following
message.
  I will be thankful if someone can suggest a solution for it.

  Message:


 Unable to connect

   Firefox can't establish a connection to the server at localhost:8983.

  The site could be temporarily unavailable or too busy. Try again in a few
moments.
   If you are unable to load any pages, check your computer's network
connection.
   If your computer or network is protected by a firewall or proxy, make sure
that Firefox is permitted to access the Web.
_

With Regds:
Divakar

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Unable-to-getting-started-with-SOLR-tp3497276p3497276.html
Sent from the Solr - User mailing list archive at Nabble.com.





Re: Unable to getting started with SOLR

2011-11-10 Thread Per Newgro

Sounds strange. Did you do java -jar start.jar on the console?

Am 10.11.2011 18:19, schrieb dsy99:


Yes, I executed the server start.jar embedded in the example folder but am not
getting any message after that. I checked the logs also; they are empty.





On Thu, 10 Nov 2011 22:34:57 +0530  wrote

Did you start the server (

*java -jar start.jar*

)? Was it successful? Have you checked the logs?

Am 10.11.2011 17:54, schrieb dsy99:

Hi all,
  Sorry for the inconvenience caused to anyone, but I need a reply for the
following.

I want to work in Solr, and for the same I downloaded it and started to
follow the instructions provided in the tutorial available at
http://lucene.apache.org/solr/tutorial.html to execute some examples
first.
But when I tried to check whether Solr is running or not by using
http://localhost:8983/solr/admin/ in the web browser, I found the following
message.
  I will be thankful if someone can suggest a solution for it.

  Message:

  Unable to connect

  Firefox can't establish a connection to the server at localhost:8983.

  The site could be temporarily unavailable or too busy. Try again in a few
moments.
  If you are unable to load any pages, check your computer's network
connection.
  If your computer or network is protected by a firewall or proxy, make sure
that Firefox is permitted to access the Web.
_

With Regds:
Divakar

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Unable-to-getting-started-with-SOLR-tp3497276p3497276.html
Sent from the Solr - User mailing list archive at Nabble.com.
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Unable-to-getting-started-with-SOLR-tp3497276p3497364.html
Sent from the Solr - User mailing list archive at Nabble.com.




Re: Search calendar avaliability

2011-10-27 Thread Per Newgro

What you are looking for is imho not related to Solr in particular.
The topic would be "Solr as a temporal database".
In your case, if you have a timeline from 0 to 10 and you have two
documents from 1 to 6 and 5 to 13, you can get all documents within 0 - 10
by querying document.end >= 0 and document.start <= 10.
Whether you use >= / <= or strict comparisons depends on your definition of
outside and inside the interval. But note that the fields end and start are exchanged.
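
As a sketch in Solr range-query syntax (assuming numeric fields named start and end, as in the example above), the overlap filter would be:

```
fq=end:[0 TO *] AND start:[* TO 10]
```

Using exclusive bounds ({0 TO *}) instead expresses the strict-inequality variant.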

Hth
Per

Am 27.10.2011 12:06, schrieb Anatoli Matuskova:

Hello,
I want to filter search by calendar availability. For each document I know
the days on which it is not available.
How could I build my fields to filter the documents that are available in a
range of dates?
For example, a document A is available from 1-9-2011 to 5-9-2011 and is
available from 17-9-2011 to 22-9-2011 too (it's not available in the gap in
between).
If the filter query asks for availability from 2-9-2011 to 4-9-2011, docA would
be a match.
If the filter query asks for availability from 2-9-2011 to 20-9-2011, docA wouldn't
be a match, as even though the start and end are available, there's a gap of no
availability between them.
Is this possible with Solr?

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Search-calendar-avaliability-tp3457203p3457203.html
Sent from the Solr - User mailing list archive at Nabble.com.





Re: How can i find a document by a special id?

2011-07-21 Thread Per Newgro
The problem is that i didn't store the mediacode in a field. Because the code
is used frequently for getting the customer source.

So far i've found the solr.WordDelimiterFilterFactory which is (from the Wiki)
the way to go. The problem seems to be that i'm searching a longer string
than i've indexed. I only index the numeric id (12345).
But the query string is BR12345. I don't get any results. Can i fine-tune
the WDFF somehow?

Using the admin/analysis.jsp i see:

Index Analyzer
org.apache.solr.analysis.StandardTokenizerFactory {luceneMatchVersion=LUCENE_33}
position1
term text   BR12345
startOffset 0
endOffset   7
type        ALPHANUM
org.apache.solr.analysis.StopFilterFactory {words=stopwords.txt, 
ignoreCase=true, enablePositionIncrements=true, luceneMatchVersion=LUCENE_33}
position1
term text   BR12345
startOffset 0
endOffset   7
type        ALPHANUM
org.apache.solr.analysis.WordDelimiterFilterFactory {splitOnCaseChange=1, 
generateNumberParts=1, catenateWords=1, luceneMatchVersion=LUCENE_33, 
generateWordParts=1, catenateAll=0, catenateNumbers=1}
position1   2
term text   BR  12345
startOffset 0   2
endOffset   2   7
type        ALPHANUM  ALPHANUM
org.apache.solr.analysis.LowerCaseFilterFactory {luceneMatchVersion=LUCENE_33}
position1   2
term text   br  12345
startOffset 0   2
endOffset   2   7
type        ALPHANUM  ALPHANUM
Query Analyzer
org.apache.solr.analysis.StandardTokenizerFactory {luceneMatchVersion=LUCENE_33}
position1
term text   BR12345
startOffset 0
endOffset   7
type        ALPHANUM
org.apache.solr.analysis.StopFilterFactory {words=stopwords.txt, 
ignoreCase=true, enablePositionIncrements=true, luceneMatchVersion=LUCENE_33}
position1
term text   BR12345
startOffset 0
endOffset   7
type        ALPHANUM
org.apache.solr.analysis.SynonymFilterFactory {synonyms=synonyms.txt, 
expand=true, ignoreCase=true, luceneMatchVersion=LUCENE_33}
position1
term text   BR12345
startOffset 0
endOffset   7
type        ALPHANUM
org.apache.solr.analysis.WordDelimiterFilterFactory {splitOnCaseChange=1, 
generateNumberParts=1, catenateWords=0, luceneMatchVersion=LUCENE_33, 
generateWordParts=1, catenateAll=0, catenateNumbers=0}
position1   2
term text   BR  12345
startOffset 0   2
endOffset   2   7
type        ALPHANUM  ALPHANUM
org.apache.solr.analysis.LowerCaseFilterFactory {luceneMatchVersion=LUCENE_33}
position1   2
term text   br  12345
startOffset 0   2
endOffset   2   7
type        ALPHANUM  ALPHANUM

My field type is here

schema.xml
<fieldType name="text_general" class="solr.TextField"
    positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true"
        words="stopwords.txt" enablePositionIncrements="true" />
    <!-- in this example, we will only use synonyms at query time -->
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
        ignoreCase="true" expand="false"/>
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1"
        generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0"
        splitOnCaseChange="1"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true"
        words="stopwords.txt" enablePositionIncrements="true" />
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
        ignoreCase="true" expand="true"/>
    <filter class="solr.WordDelimiterFilterFactory"
        generateWordParts="1" generateNumberParts="1" catenateWords="0"
        catenateNumbers="1" catenateAll="1" splitOnCaseChange="1"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>


Thanks
Per

 Original-Nachricht 
 Datum: Wed, 20 Jul 2011 17:03:40 -0400
 Von: Bill Bell billnb...@gmail.com
 An: solr-user@lucene.apache.org solr-user@lucene.apache.org
 CC: solr-user@lucene.apache.org solr-user@lucene.apache.org
 Betreff: Re: How can i find a document by a special id?

 Why not just search the 2 fields?
 
 q=*:*&fq=mediacode:AB OR id:123456
 
 You could take the user input and replace it:
 
 q=*:*&fq=mediacode:$input OR id:$input
 
 Of course you can also use dismax and wrap with an OR.
 
 Bill Bell
 Sent from mobile
 
 
 On Jul 20, 2011, at 3:38 PM, Chris Hostetter hossman_luc...@fucit.org
 wrote:
 
  
  : Am 20.07.2011 19:23, schrieb Kyle Lee:
  :  Is the mediacode always alphabetic, and is the ID always numeric?
  :  
  : No sadly not. We expose our products on too many medias :-).
  
  If i'm understanding you correctly, you're saying even the prefix AB
 is 
  not special, that there could be any number of prefixes identifying 
different mediacodes, and the product ids aren't all numeric?
  
  your question seems 

Re: How can i find a document by a special id?

2011-07-21 Thread Per Newgro
Sorry for being that stupid. I've modified the wrong schema.

So the solr.WordDelimiterFilterFactory works as expected and solved my 
problem. I've added the line
<filter class="solr.WordDelimiterFilterFactory" generateWordParts="0"
    generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0"
    splitOnCaseChange="0" splitOnNumericChange="1"/>

to my schema and test is green.

Thanks all for helping me
Per


 Original-Nachricht 
 Datum: Thu, 21 Jul 2011 09:53:23 +0200
 Von: Per Newgro per.new...@gmx.ch
 An: solr-user@lucene.apache.org
 Betreff: Re: How can i find a document by a special id?

 The problem is that i didn't store the mediacode in a field. Because the
 code is used frequently for getting the customer source.
 
 So far i've found the solr.WordDelimiterFilterFactory which is (from
 Wiki) the way to go. The problem seems to be that i'm searching a longer
 string than i've indexed. I only index the numeric id (12345).
 But query string is BR12345. I don't get any results. Can i fine-tune
 the WDFF somehow? 
 
 By using the admin/analysis.jsp 
 
 Index Analyzer
 org.apache.solr.analysis.StandardTokenizerFactory
 {luceneMatchVersion=LUCENE_33}
 position  1
 term text BR12345
 startOffset   0
 endOffset 7
 type  ALPHANUM
 org.apache.solr.analysis.StopFilterFactory {words=stopwords.txt,
 ignoreCase=true, enablePositionIncrements=true, luceneMatchVersion=LUCENE_33}
 position  1
 term text BR12345
 startOffset   0
 endOffset 7
 type  ALPHANUM
 org.apache.solr.analysis.WordDelimiterFilterFactory {splitOnCaseChange=1,
 generateNumberParts=1, catenateWords=1, luceneMatchVersion=LUCENE_33,
 generateWordParts=1, catenateAll=0, catenateNumbers=1}
 position  1   2
 term text BR  12345
 startOffset   0   2
 endOffset 2   7
 type  ALPHANUM  ALPHANUM
 org.apache.solr.analysis.LowerCaseFilterFactory
 {luceneMatchVersion=LUCENE_33}
 position  1   2
 term text br  12345
 startOffset   0   2
 endOffset 2   7
 type  ALPHANUM  ALPHANUM
 Query Analyzer
 org.apache.solr.analysis.StandardTokenizerFactory
 {luceneMatchVersion=LUCENE_33}
 position  1
 term text BR12345
 startOffset   0
 endOffset 7
 type  ALPHANUM
 org.apache.solr.analysis.StopFilterFactory {words=stopwords.txt,
 ignoreCase=true, enablePositionIncrements=true, luceneMatchVersion=LUCENE_33}
 position  1
 term text BR12345
 startOffset   0
 endOffset 7
 type  ALPHANUM
 org.apache.solr.analysis.SynonymFilterFactory {synonyms=synonyms.txt,
 expand=true, ignoreCase=true, luceneMatchVersion=LUCENE_33}
 position  1
 term text BR12345
 startOffset   0
 endOffset 7
 type  ALPHANUM
 org.apache.solr.analysis.WordDelimiterFilterFactory {splitOnCaseChange=1,
 generateNumberParts=1, catenateWords=0, luceneMatchVersion=LUCENE_33,
 generateWordParts=1, catenateAll=0, catenateNumbers=0}
 position  1   2
 term text BR  12345
 startOffset   0   2
 endOffset 2   7
 type  ALPHANUM  ALPHANUM
 org.apache.solr.analysis.LowerCaseFilterFactory
 {luceneMatchVersion=LUCENE_33}
 position  1   2
 term text br  12345
 startOffset   0   2
 endOffset 2   7
 type  ALPHANUM  ALPHANUM
 
 My field type is here
 
 schema.xml
 <fieldType name="text_general" class="solr.TextField"
     positionIncrementGap="100">
   <analyzer type="index">
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.StopFilterFactory" ignoreCase="true"
         words="stopwords.txt" enablePositionIncrements="true" />
     <!-- in this example, we will only use synonyms at query time -->
     <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
         ignoreCase="true" expand="false"/>
     <filter class="solr.WordDelimiterFilterFactory"
         generateWordParts="1" generateNumberParts="1" catenateWords="1"
         catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
     <filter class="solr.LowerCaseFilterFactory"/>
   </analyzer>
   <analyzer type="query">
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.StopFilterFactory" ignoreCase="true"
         words="stopwords.txt" enablePositionIncrements="true" />
     <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
         ignoreCase="true" expand="true"/>
     <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1"
         generateNumberParts="1" catenateWords="0" catenateNumbers="1" catenateAll="1"
         splitOnCaseChange="1"/>
     <filter class="solr.LowerCaseFilterFactory"/>
   </analyzer>
 </fieldType>
 
 
 Thanks
 Per
 
  Original-Nachricht 
  Datum: Wed, 20 Jul 2011 17:03:40 -0400
  Von: Bill Bell billnb...@gmail.com
  An: solr-user@lucene.apache.org solr-user@lucene.apache.org
  CC: solr-user@lucene.apache.org solr-user@lucene.apache.org
  Betreff: Re: How can i find a document by a special id?
 
  Why not just search the 2 fields?
  
  q=*:*&fq=mediacode:AB OR id:123456
  
  You could take the user input and replace it:
  
  q

How can i find a document by a special id?

2011-07-20 Thread Per Newgro

Hi,

i'm new to solr. I built an application using the standard solr 3.3 
examples as default.
My id field is a string and is copied to a solr.TextField (searchtext) 
for search queries.

All works fine except when i try to get documents by a special id.

Let me explain the details. Assume id = 1234567. I would like to 
query this document
by using q=searchtext:AB1234567. The prefix (AB) is acting as a 
pseudo-id in our
system. Users know and search for it. But it's not findable because 
solr-index only knows

the short id.

Adding a new document with the prefixed-id as id is not an option. Then 
i have to add

many documents.

For my understanding, stemming and ngram tokenizing are not possible
because they act on tokens longer than the search token.

How can i do this?

Thanks
Per


Re: How can i find a document by a special id?

2011-07-20 Thread Per Newgro

Am 20.07.2011 18:03, schrieb Kyle Lee:

Perhaps I'm missing something, but if your fields are indexed as 1234567
but users are searching for AB1234567, is it not possible simply to strip
the prefix from the user's input before sending the request?

On Wed, Jul 20, 2011 at 10:57 AM, Per Newgro per.new...@gmx.ch wrote:


Hi,

i'm new to solr. I built an application using the standard solr 3.3
examples as default.
My id field is a string and is copied to a solr.TextField (searchtext)
for search queries.
All works fine except when i try to get documents by a special id.

Let me explain the details. Assume id = 1234567. I would like to query
this document
by using q=searchtext:AB1234567. The prefix (AB) is acting as a pseudo-id
in our
system. Users know and search for it. But it's not findable because
solr-index only knows
the short id.

Adding a new document with the prefixed-id as id is not an option. Then i
have to add
many documents.

For my understanding, stemming and ngram tokenizing are not possible
because they act on tokens longer than the search token.

How can i do this?

Thanks
Per

Sorry for not being clear here. I only use a single search field. It can
contain multiple search words.

One of them is the id. So i don't really know that the search word is an id.
The use case is: We have a product database with some items. The product
has an id, name, features
etc. They all go into the described searchtext field. We promote our
products in different medias. So every
product can have a mediaid (AB is the mediacode, 1234567 is the id). And
users should be able to find

the product by id and mediaid.

I hope i could explain myself better.

Thanks for helping me
Per


Re: How can i find a document by a special id?

2011-07-20 Thread Per Newgro

Am 20.07.2011 19:23, schrieb Kyle Lee:

Is the mediacode always alphabetic, and is the ID always numeric?


No sadly not. We expose our products on too many medias :-).

Per


Re: Is solrj 3.3.0 ready for field collapsing?

2011-07-05 Thread Per Newgro

Thanks for your response.

Am 05.07.2011 13:53, schrieb Erick Erickson:

Let's see the results of adding debugQuery=on to your URL. Are you getting
any documents back at all? If not, then your query isn't getting any
documents to group.
I didn't get any docs back. But they have been in the response (I saw
them in the debugger).
But the structure had changed, so that the DocumentBuilder didn't bring me
any results (getBeans()).
I investigated a bit further and found out that i had to set the
group.main param to true.
https://builds.apache.org/job/Solr-trunk/javadoc/org/apache/solr/common/params/GroupParams.html#GROUP_MAIN

Now i get results. So the answer seems to be yes :-).
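
For reference, the raw request parameters corresponding to that SolrQuery setup would look roughly like this (the field name is a placeholder):

```
/select?q=*:*&group=true&group.field=myfield&group.main=true
```

With group.main=true the grouped hits are returned as an ordinary flat doc list, which is why getBeans() can map them again.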


You haven't told us much about what you're trying to do, you might want to
review: http://wiki.apache.org/solr/UsingMailingLists

Sorry for that.


Best
Erick
On Jul 4, 2011 11:55 AM, Per Newgro per.new...@gmx.ch wrote:


Cheers
Per


Is solrj 3.3.0 ready for field collapsing?

2011-07-04 Thread Per Newgro

Hi,

i've tried to add the params for group=true and group.field=myfield by
using the SolrQuery.
But the result is null. Do i have to configure something? In the wiki part
for field collapsing i couldn't

find anything.

Thanks
Per