RE: open files under linux

2004-02-20 Thread Stephen Eaton
The easiest way is to use sysctl to view and change the max-files setting. For
some reason fs.file-max defaults to 8000 or something small (on Mandrake,
anyway).

Use sysctl fs.file-nr to view what is currently in use and what the max is set
to. It reports file handle usage in the format xxx yyy zzz, where xxx = handles
currently allocated, yyy = allocated but unused, and zzz = the system maximum
(fs.file-max). So xxx should never get near zzz; if it does, you will get the
out-of-file errors. Try running the command when you are hitting the issue and
see what the system values are.
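
The same three numbers can be read directly from /proc; a minimal sketch in
Python (Linux-only, with the field meanings as described above):

```python
# Read /proc/sys/fs/file-nr: three whitespace-separated integers --
# handles currently allocated, allocated-but-unused handles, and
# the system maximum (fs.file-max).
def parse_file_nr(text):
    allocated, unused, maximum = (int(x) for x in text.split())
    return allocated, unused, maximum

if __name__ == "__main__":
    with open("/proc/sys/fs/file-nr") as f:
        allocated, unused, maximum = parse_file_nr(f.read())
    # Handles actually in use = allocated minus the unused pool.
    print(f"in use: {allocated - unused}  max: {maximum}")
```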

To change it, use
sysctl -w fs.file-max=32768  to give it something decent.

Should solve your problems.

Stephen...


 -Original Message-
 From: Morus Walter [mailto:[EMAIL PROTECTED] 
 Sent: Friday, 20 February 2004 7:41 PM
 To: Lucene Users List
 Subject: open files under linux
 
 Rasik Pandey writes:
  
 As a side note, regarding the Too many open files issue,
 has anyone noticed that this could be related to the JVM? For
 instance, I have a coworker who tried to run a number of
 optimized indexes in one JVM instance and received the Too
 many open files error. With the same number of available
 file descriptors (on linux ulimit = unlimited), he split the
 number of indices over two JVM instances and his problem
 disappeared.  He also tested the problem by increasing the
 memory available to the JVM instance, via the -Xmx parameter,
 with all indices running in one JVM instance, and again the
 problem disappeared. I think the issue deserves more testing
 to pin-point the exact problem, but I was just wondering if
 anyone has already experienced anything similar or if this
 information could be of use to anyone, in which case we
 should probably start a new thread dedicated to this issue.
  
 The limit is per process; two JVMs make two processes.
 (There's a per-system limit too, but it's much higher; I
 think you'll find it in /proc/sys/fs/file-max, and its default
 value depends on the amount of memory the system has.)
 
 AFAIK there's no way of setting open files to unlimited. At
 least neither bash nor tcsh accepts that.
 But it should not be a problem to set it to very high values.
 And you should be able to increase the system wide limit by 
 writing to /proc/sys/fs/file-max as long as you have enough memory.
 
 I never used this, though.
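 
Both limits can also be inspected from code; a sketch using Python's stdlib
(the resource module is Unix-only, and the /proc path is Linux-specific):

```python
import resource

def fd_limits():
    # Per-process limit -- what `ulimit -n` reports: (soft, hard).
    return resource.getrlimit(resource.RLIMIT_NOFILE)

if __name__ == "__main__":
    soft, hard = fd_limits()
    print("per-process soft/hard:", soft, hard)
    # System-wide ceiling, shared by all processes:
    with open("/proc/sys/fs/file-max") as f:
        print("system-wide max:", f.read().strip())
```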
 
 Morus
 
 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]
 
 
 __ NOD32 1.628 (20040218) Information __
 
 This message was checked by NOD32 antivirus system.
 http://www.nod32.com
 
 





RE: Lucene with Postgres db

2004-01-31 Thread Stephen Eaton
What I do is run a db select to build up a result set of all the
documents/fields I need to search, then I index them.  This is usually
performed once a day over all db records.  I'm currently using
Turbine and Torque to do this.
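
Lucene itself is Java, but the select-all-then-index pattern described above
can be sketched language-neutrally; here is an illustrative Python version
using sqlite3 and a toy in-memory index as a stand-in for Lucene (the docs
table and its columns are made up):

```python
import sqlite3

def build_index(conn):
    # Once a day: select every record and index its searchable
    # fields together with the record's primary key.
    index = {}  # term -> set of record keys (toy stand-in for a Lucene index)
    for key, title, body in conn.execute("SELECT id, title, body FROM docs"):
        for term in (title + " " + body).lower().split():
            index.setdefault(term, set()).add(key)
    return index
```

A search then looks terms up in the index and gets back record keys; the full
rows are fetched from the database afterwards.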

 -Original Message-
 From: Ankur Goel [mailto:[EMAIL PROTECTED] 
 Sent: Saturday, 31 January 2004 8:10 PM
 To: [EMAIL PROTECTED]
 Subject: Lucene with Postgres db
 
 Hi,
 
 I have to search documents which are stored in a postgres db. 
 
 Can someone give a clue how to go about it?
 
 Thanks
 
 Ankur Goel
 Brickred Technologies
 B-2 IInd Floor, Sector-31
 Noida,India
 P:+91-1202456361
 C:+91-9810161323
 E:[EMAIL PROTECTED]
 http://www.brickred.com
  
 
 
 
 
 
 





RE: Postgres and lucene

2003-07-07 Thread Stephen Eaton
What I am doing in the index process is basically dumping the database via
a select-all statement.

Once selected, the result set is looped through, and the relevant fields as
well as each record's key are indexed. Then, when I need to retrieve the
data, I do a select on the relevant record based on the key.
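
The retrieval half, a keyed select for each hit the index returns, might look
like this (again Python/sqlite3 as an illustrative stand-in; the schema is
hypothetical):

```python
import sqlite3

def fetch_by_key(conn, key):
    # The search index stores only the record's key; the full record
    # comes back from the database with a select on that key.
    return conn.execute(
        "SELECT id, title, body FROM docs WHERE id = ?", (key,)
    ).fetchone()
```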

Stephen...

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, 8 July 2003 12:38 AM
 To: Lucene Users List
 Subject: Postgres and lucene


 Hi,
  I'm new to lucene and I have had a lot of trouble finding
 information on how exactly to use lucene to search a postgres
 database. I've searched the archives for this list, but found
 nothing specific enough to help me. Has anyone used Lucene to
 search a postgres database who could help?

 Thanks,
Jessica









RE: optimize()

2002-11-26 Thread Stephen Eaton
I don't know if this answers your question, but I had a lot of problems
with Lucene bombing out with out-of-memory errors.  I was not calling
optimize(); I tried it, and hey presto, no more problems.

-Original Message-
From: Leo Galambos [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 27 November 2002 5:22 AM
To: [EMAIL PROTECTED]
Subject: optimize()


How does it affect overall performance when I do not call optimize()?

THX

-g-




