Re: PHP-Lucene Integration

2005-02-06 Thread Maurits van Wijland
Hi Owen,
This can easily be done! Simply install Tomcat on port 8080 and set up a 
JK2 connector or a proxy in Apache that points to Tomcat. Then all 
requests for JSPs can be sent to Tomcat. The search engine can even be 
placed on a separate server. If you give me some details on your server, 
I will create a proxy configuration for your Apache!
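
As a sketch of that setup, assuming the Lucene 1.4-era API: a tiny search
servlet hosted in Tomcat that the PHP pages can call over plain HTTP (for
example with file_get_contents()). The class name, index path and field
names here are invented for illustration.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;

// Hypothetical "Lucene server": Apache proxies /search to Tomcat, and the
// PHP front end fetches these tab-separated results over HTTP.
public class SearchServlet extends HttpServlet {
    public void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        res.setContentType("text/plain");
        PrintWriter out = res.getWriter();
        try {
            IndexSearcher searcher = new IndexSearcher("/path/to/index");
            // Lucene 1.4-style static parse; "contents" is an assumed field.
            Query query = QueryParser.parse(req.getParameter("q"),
                    "contents", new StandardAnalyzer());
            Hits hits = searcher.search(query);
            for (int i = 0; i < hits.length(); i++) {
                out.println(hits.doc(i).get("url") + "\t"
                        + hits.doc(i).get("title"));
            }
            searcher.close();
        } catch (Exception e) {
            res.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR,
                    e.getMessage());
        }
    }
}

A real application would cache the IndexSearcher rather than opening it on
every request; this is only the shape of the "Lucene server plus PHP front
end" idea mentioned below.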

regards,
Maurits
Owen Densmore wrote:
I'm building a Lucene project for a client who uses PHP for their 
dynamic web pages.  It would be possible to add servlets to their 
environment easily enough (they use Apache), but I'd like to have 
minimal impact on their IT group.

There appears to be a PHP Java extension that lets PHP call back and 
forth to Java classes, but I thought I'd ask here if anyone has had 
success using Lucene from PHP.

Note: I looked in the Lucene in Action search page, and yup, I bought 
the book and love it!  No examples there, though.  The list archives 
mention that using Java Lucene from PHP is the way to go, without 
saying how.  There's mention of a Lucene server and a PHP interface to 
that, and some similar comments.  But I'm a bit surprised there isn't 
more on using the official Java extension to PHP.

Thanks for the great package!
Owen




Re: Summarization; sentence-level and document-level filters.

2003-12-17 Thread maurits van wijland
Gregor,

I don't have any benchmarks for summarization. Sorry!
I have two test versions of commercial summarizers, and
their performance is better than Classifier4J's, but those
are written in C++, so you can't compare them properly.

regards,
Maurits


- Original Message - 
From: "Gregor Heinrich" <[EMAIL PROTECTED]>
To: "'Lucene Users List'" <[EMAIL PROTECTED]>
Sent: Tuesday, December 16, 2003 9:35 PM
Subject: RE: Summarization; sentence-level and document-level filters.


> Maurits: thanks for the hint to Classifier4J -- I have had a look at this
> package and tried the SimpleSummarizer, and it seems to work fine.
> (However, as I don't know the benchmarks for summarization, I'm not the
> one to judge.)
>
> Do you have experience with it?
>
> Gregor
>
> -Original Message-
> From: maurits van wijland [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, December 16, 2003 1:09 AM
> To: Lucene Users List; [EMAIL PROTECTED]
> Subject: Re: Summarization; sentence-level and document-level filters.
>
>
> Hi Gregor,
>
> So far as I know, there is no summarizer in the plans. But maybe I can
> help you along the way. Have a look at the Classifier4J project on
> SourceForge.
>
> http://classifier4j.sourceforge.net/
>
> It has a small document summarizer besides a Bayes classifier. It might
> speed up your coding.
>
> On the level of Lucene, I have no idea. My gut feeling says that a
> summary should be built before the text is tokenized! The tokenizer can
> of course be used when analysing a document, but hooking into the Lucene
> indexing is a bad idea, I think.
>
> Does anyone else have any ideas?
>
> regards,
>
> Maurits
>
>
>
>
> - Original Message -
> From: "Gregor Heinrich" <[EMAIL PROTECTED]>
> To: "'Lucene Users List'" <[EMAIL PROTECTED]>
> Sent: Monday, December 15, 2003 7:41 PM
> Subject: Summarization; sentence-level and document-level filters.
>
>
> > Hi,
> >
> > Is there any possibility to do sentence-level or document-level
> > analysis with the current Analysis/TokenStream architecture? Or where
> > else is the best place to plug in customised document-level and
> > sentence-level analysis features? Is there any "precedence case"?
> >
> > My technical problem:
> >
> > I'd like to include a summarization feature in my system, which should
> > (1) make the best use of the architecture already there in Lucene, and
> > (2) be able to trigger summarization on a per-document basis while
> > requiring sentence-level information, such as full stops and commas.
> > To preserve this "punctuation", a special Tokenizer can be used that
> > outputs such landmarks as tokens instead of filtering them out. The
> > actual SummaryFilter then filters out the punctuation for its
> > successors in the Analyzer's filter chain.
> >
> > The other, more complex thing is the document-level information: as
> > Lucene's architecture uses a filter concept that does not know about
> > the document the tokens are generated from (which is good abstraction),
> > a document-specific operation like summarization is a bit of an awkward
> > thing with this (and originally not intended, I guess). On the other
> > hand, I'd like to have the existing filter structure in place for
> > preprocessing of the input, because my raw texts are generated by
> > converters from other formats that output unwanted chars (from figures,
> > page numbers, etc.), which are filtered out anyway by my custom
> > Analyzer.
> >
> > Any idea how to solve this second problem? Is there any support for
> > such document / sentence structure analysis planned?
> >
> > Thanks and regards,
> >
> > Gregor





Re: Summarization; sentence-level and document-level filters.

2003-12-15 Thread maurits van wijland
Hi Gregor,

So far as I know, there is no summarizer in the plans. But maybe I can
help you along the way. Have a look at the Classifier4J project on
SourceForge.

http://classifier4j.sourceforge.net/

It has a small document summarizer besides a Bayes classifier. It might
speed up your coding.
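
For a quick idea of what that looks like in code -- the class and method
names follow Classifier4J's own docs (with British spelling), but treat the
exact package path and signatures as assumptions:

import net.sf.classifier4j.summariser.ISummariser;
import net.sf.classifier4j.summariser.SimpleSummariser;

public class SummaryDemo {
    public static void main(String[] args) {
        ISummariser summariser = new SimpleSummariser();
        String text = "Lucene is a search library. It indexes and searches "
                + "documents. A summary could be built before tokenizing.";
        // Ask for the single most representative sentence.
        System.out.println(summariser.summarise(text, 1));
    }
}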

On the level of Lucene, I have no idea. My gut feeling says that a summary
should be built before the text is tokenized! The tokenizer can of course
be used when analysing a document, but hooking into the Lucene indexing is
a bad idea, I think.

Does anyone else have any ideas?

regards,

Maurits




- Original Message - 
From: "Gregor Heinrich" <[EMAIL PROTECTED]>
To: "'Lucene Users List'" <[EMAIL PROTECTED]>
Sent: Monday, December 15, 2003 7:41 PM
Subject: Summarization; sentence-level and document-level filters.


> Hi,
>
> Is there any possibility to do sentence-level or document-level analysis
> with the current Analysis/TokenStream architecture? Or where else is the
> best place to plug in customised document-level and sentence-level
> analysis features? Is there any "precedence case"?
>
> My technical problem:
>
> I'd like to include a summarization feature in my system, which should
> (1) make the best use of the architecture already there in Lucene, and
> (2) be able to trigger summarization on a per-document basis while
> requiring sentence-level information, such as full stops and commas. To
> preserve this "punctuation", a special Tokenizer can be used that outputs
> such landmarks as tokens instead of filtering them out. The actual
> SummaryFilter then filters out the punctuation for its successors in the
> Analyzer's filter chain.
>
> The other, more complex thing is the document-level information: as
> Lucene's architecture uses a filter concept that does not know about the
> document the tokens are generated from (which is good abstraction), a
> document-specific operation like summarization is a bit of an awkward
> thing with this (and originally not intended, I guess). On the other
> hand, I'd like to have the existing filter structure in place for
> preprocessing of the input, because my raw texts are generated by
> converters from other formats that output unwanted chars (from figures,
> page numbers, etc.), which are filtered out anyway by my custom Analyzer.
>
> Any idea how to solve this second problem? Is there any support for such
> document / sentence structure analysis planned?
>
> Thanks and regards,
>
> Gregor
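
Sketching Gregor's Tokenizer/SummaryFilter idea against the Lucene 1.x-era
analysis API; the class name and the landmark convention (punctuation
emitted as "." and "," tokens) are hypothetical:

import java.io.IOException;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;

// Hypothetical filter: a sentence-aware Tokenizer upstream emits "." and
// "," as landmark tokens; this filter consumes them, so downstream filters
// (stemming, stop words) only ever see ordinary terms.
public class PunctuationFilter extends TokenFilter {

    public PunctuationFilter(TokenStream in) {
        super(in);
    }

    public Token next() throws IOException {
        for (Token t = input.next(); t != null; t = input.next()) {
            String text = t.termText();
            // Skip the punctuation landmarks, pass everything else through.
            if (!".".equals(text) && !",".equals(text)) {
                return t;
            }
        }
        return null; // end of stream
    }
}

A summarizing filter would sit in front of this one in the chain, where the
landmarks are still visible.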





Re: Document Clustering

2003-11-11 Thread maurits van wijland
Hi All and Marc,

There is the Carrot project:
http://www.cs.put.poznan.pl/dweiss/carrot/

The Carrot system consists of web services that can easily be fed by a
Lucene result list. You simply have to create a JSP that produces this XML
file, plus a custom process and input component. The input component for
Lucene could look like:


<!-- tag names are approximate: the markup was lost in the archive;
     the attribute values are as posted -->
<component xmlns="http://www.dawidweiss.com/projects/carrot/componentDescriptor"
           framework="Carrot2">
  <input serviceURL="http://localhost/weblucene/c2.jsp"
         infoURL="http://localhost/weblucene/" />
</component>


The c2.jsp file simply has to translate a result list into an XML file such
as:


<!-- again, tag names are approximate; only the element contents survived -->
<searchresult>
  <document id="...">
    <score>1.0</score>
    <url>http://...</url>
    <title>sum 1</title>
    <snippet>snip 2</snippet>
  </document>
  <document id="...">
    <score>1.0</score>
    <url>http://...</url>
    <title>sum 2</title>
    <snippet>snip 2</snippet>
  </document>
</searchresult>


Feed this into the Carrot system, and you will get a nicely clustered
result list. The amazing part of this clustering mechanism is that
the cluster labels are incredible; they're great!
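
A sketch of what that c2.jsp could look like, assuming the Lucene 1.x API,
a made-up index path and field names, and the approximate XML layout above:

<%@ page contentType="text/xml" %>
<%@ page import="org.apache.lucene.analysis.standard.StandardAnalyzer,
                 org.apache.lucene.queryParser.QueryParser,
                 org.apache.lucene.search.Hits,
                 org.apache.lucene.search.IndexSearcher" %>
<%
    // Run the query and emit the result list as XML for Carrot.
    IndexSearcher searcher = new IndexSearcher("/path/to/index");
    Hits hits = searcher.search(QueryParser.parse(
            request.getParameter("query"), "contents",
            new StandardAnalyzer()));
%>
<searchresult>
<%  for (int i = 0; i < hits.length() && i < 100; i++) { %>
  <document id="<%= i %>">
    <score><%= hits.score(i) %></score>
    <url><%= hits.doc(i).get("url") %></url>
    <title><%= hits.doc(i).get("title") %></title>
    <snippet><%= hits.doc(i).get("summary") %></snippet>
  </document>
<%  } %>
</searchresult>
<%  searcher.close(); %>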

Then there is an open source project called Classifier4J that can
be used for classification, the opposite of clustering. These other
open source projects are a great addition to the Lucene system.

I hope this helps...

Marc, what are you building?? Maybe we can help!

Kind regards,

Maurits


- Original Message - 
From: "marc" <[EMAIL PROTECTED]>
To: "Lucene Users List" <[EMAIL PROTECTED]>
Sent: Tuesday, November 11, 2003 5:15 PM
Subject: Document Clustering


Hi,

does anyone have any sample code/documentation available for doing
document-based clustering using Lucene?

Thanks,
Marc






Re: problems extracting documents using Terms

2003-07-22 Thread maurits van wijland
Maurice,

Please have a look at the tool Luke:

http://www.getopt.org/luke

It can help you see into your index.
Maybe there is some trouble with spaces
or trimming of strings... but Luke can
help you spot that!
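
One way to check the same thing in code, with the stock Lucene TermEnum
API (the index path is assumed; the field name is from your message):

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermEnum;

// Dump every indexed term of the "url" field, bracketed so that stray
// spaces become visible.
public class DumpUrlTerms {
    public static void main(String[] args) throws Exception {
        IndexReader reader = IndexReader.open("/path/to/index");
        TermEnum terms = reader.terms(new Term("url", ""));
        try {
            // terms() positions the enum at the first matching term,
            // so examine term() before calling next().
            do {
                Term t = terms.term();
                if (t == null || !"url".equals(t.field())) break;
                System.out.println("[" + t.text() + "]");
            } while (terms.next());
        } finally {
            terms.close();
            reader.close();
        }
    }
}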

Good luck,

Maurits


- Original Message - 
From: "Maurice Coyle" <[EMAIL PROTECTED]>
To: "Lucene Users List" <[EMAIL PROTECTED]>
Sent: Monday, July 21, 2003 9:18 PM
Subject: problems extracting documents using Terms


> Hi all,
>
> I'm trying to extract a document number from my Lucene index using (where
> read is an IndexReader):
>
> TermDocs td = read.termDocs(new Term("url", someurl));
>
> This call never returns any results, despite the fact that I definitely
> know that a document with the url field equal to someurl exists in the
> index (because I know that document's document number, and the code:
>
> Document d = read.document(docnum);
> System.out.println(d.get("url"));
>
> outputs the String someurl).
>
> I don't know what's going on; hopefully I've included enough information
> for someone to spot what's up.
>
> Thanks very much,
> Maurice





Re: Regarding Setup Lucine for my site

2003-03-05 Thread maurits van wijland
Catalin,
could you send me a zip file with your implementation?

Thanks,

maurits
- Original Message -
From: "Catalin" <[EMAIL PROTECTED]>
To: "Lucene Users List" <[EMAIL PROTECTED]>
Sent: Wednesday, March 05, 2003 10:26 AM
Subject: Re: Regarding Setup Lucine for my site


hi there !
we have almost the same configuration (site, index, paths, etc.) as you.
we used another approach for the search on our site.

eg: use a small crawler to index some fed urls,
build the lucene index, and make the web search app use that index.

for crawling:
http://cvs.cabanova.ro/viewcvs.cgi/indexer/

for webapp:
http://cvs.cabanova.ro/viewcvs.cgi/wsearch/

running online:
http://www.anet.ro/search?query=star+wars

the code of the indexer is based on the i2a websearch application demo
that is listed on the lucene jakarta site.

take a look, maybe you might find something useful !
there is no .zip available for download,
but if somebody requests the .zip
we can put it online.

have fun !

Catalin

  - Original Message -
  From: Samuel Alfonso Velázquez Díaz
  To: Lucene Users List
  Sent: Wednesday, March 05, 2003 3:16 AM
  Subject: Re: Regarding Setup Lucine for my site



  Yes, I have:
  1. The directory with the files to index:
  C:/filesToIndex/www/

  2. A path where the index files from the search engine will be created,
let's say:
  C:/index/
  3. An internet domain whose name is: www.mysite.com
  4. A web application context that runs at http://www.mysite.com/search

  Once I have set up all the above, I want to be able to use the search
application at:
  http://www.mysite.com/search/search.jsp
  And I don't want the results that I get from the index (step 2) to look
like:
  Your file is at
  C:/filesToIndex/www/some_html/my_doc.html
  The results should be:
  Your file is at
  http://www.mysite.com/some_html/my_doc.html
  From the comments I have read (THANK YOU VERY MUCH) I conclude that there
is no way to generate the index with some custom prefix (such as
http://www.mysite.com/ for the documents at C:/filesToIndex/www/).
  It seems that I have to modify my web application
(http://www.mysite.com/search/search.jsp) to include some logic to replace
"C:/filesToIndex/www/" with "http://www.mysite.com/".
  If you could point me to the place in the Lucene source code to include
this logic and fix it once and for all, I would appreciate it a lot.
  The command I used to generate this index was:
  java org.apache.lucene.demo.IndexHTML -create -index C:\index C:\filesToIndex\www\
  Now in the web application I have to modify:

IndexSearcher searcher;
Query query;
Hits hits;

// some code after...
hits = searcher.search(query);

for (int i = 0; i < hits.length(); i++) {
    Document doc = hits.doc(i);
    String doctitle = doc.get("title");
    String url = doc.get("url");
    // ...
}

  I have to do something like url = "http://www.mysite.com/" +
url.substring("C:/filesToIndex/www/".length());
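
  Spelled out as a small helper (the class and method names are made up;
the prefix and site values are from the message above):

class UrlMapper {
    // Hypothetical helper: rewrite an indexed file-system path into a
    // site URL before displaying the hit.
    static String toSiteUrl(String indexedPath) {
        String prefix = "C:/filesToIndex/www/";
        String site = "http://www.mysite.com/";
        if (indexedPath.startsWith(prefix)) {
            return site + indexedPath.substring(prefix.length());
        }
        return indexedPath; // already a URL, or outside the mirrored tree
    }
}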

  Regards!!!
  And thanks again
   Pinky Iyer <[EMAIL PROTECTED]> wrote:
  I don't understand the explanation. When I try to index the documents as
mentioned in the examples, and then run the app and do a sample search, it
points to the directory structure, say "c:/filesToIndex/www/", instead of
"http://localhost:8080/www/". So how can this be changed to reflect the
website domain as you mentioned? Could you explain again? Say my docs are
under the directory c:/filesToIndex/www/ and the website is, as you said,
http://localhost:8080/ - then how do I proceed?
  Thanks in advance!
  Samuel Alfonso Velázquez Díaz wrote:
  Oh OK, I thought it was going to be something like the Egothor search
engine (a Java-based search engine). When you create the index there, you
issue a command like:
  java org.egothor.indexer.mirror.DoTanker /tmp/my_www
Project/Egothor/var/www as http://localhost:8080
  /tmp/my_www: the path to the directory where the index is to be
created
  Project/Egothor/var/www: the path to the local file system files to be
indexed
  and "as http://localhost:8080" is the prefix that the index will keep in
the hit list. This way the index will be relative to http://localhost:8080,
even if your production site is another site.
  Thanks for your comments; anyway, now I know that I have to modify code
to do this.
  Regards!
  Jeff Linwood wrote: Hi,

  I'm not a hundred percent sure I understand what you are asking, but when
  you get the results back from Lucene (the hits), it's up to you to format
  them for display on a web page - you can always do the modification there
  when you display the links to the results.

  Jeff
  - Original Message -
  From: "Samuel Alfonso Velázquez Díaz"
  To: "Lucene Users List"
  Sent: Tuesday, March 04, 2003 11:33 AM
  Subject: Regarding Setup Lucine for my site


  >
  > The documentation says:
  >
  > Once you've gotten this far you're probably itching to go. Let's start
  > by creating the index you'll 

Re: Custom summary

2003-02-04 Thread maurits van wijland
Marco, Otis,

Check the user and development lists, because there has been a post on
summaries. I believe it was combined with a term highlighter.

regards,

maurits.

- Original Message -
From: "Otis Gospodnetic" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: "marco scibetta" <[EMAIL PROTECTED]>
Sent: Tuesday, February 04, 2003 4:27 PM
Subject: Re: Custom summary


> Redirecting to lucene-user.
>
> No, not out of the box.  You have to get the whole field of a document
> and look for the pieces of the query in it yourself.
>
> Otis
>
> --- marco scibetta <[EMAIL PROTECTED]> wrote:
> > Hello,
> >
> > I'm wondering if it is possible to retrieve the part of the document
> > in which the word/words contained in the query are present, instead
> > of the first n characters.
> >
> > marco scibetta
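
A minimal sketch of what Otis describes, assuming the field was stored at
indexing time; the field name and window size are arbitrary:

import org.apache.lucene.document.Document;

public class Snippets {
    // Show a window of text around the first query-word hit in a stored
    // field, instead of the first n characters.
    static String contextSnippet(Document doc, String queryWord, int window) {
        String text = doc.get("contents"); // assumes "contents" was stored
        if (text == null) return "";
        int pos = text.toLowerCase().indexOf(queryWord.toLowerCase());
        if (pos < 0) {
            // No match in this field: fall back to the leading characters.
            return text.substring(0, Math.min(window, text.length()));
        }
        int start = Math.max(0, pos - window / 2);
        int end = Math.min(text.length(),
                pos + queryWord.length() + window / 2);
        return "..." + text.substring(start, end) + "...";
    }
}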






Language identifier, stemmers and analyzers

2002-11-17 Thread maurits van wijland
Hi there,
this is a cross-post. I first sent this to the developers list, but somehow
there has been no response yet. Maybe there is someone here who can help me!

I am hoping to improve Lucene and add a strategy for multilingual
support. We already have stemmers for almost all European languages;
now, I think this is the next step.

Any thoughts, please?

Maurits


> Dear all,
>
> Brad Wellington has created a language identifier which can be used in
> combination with the Snowball stemmers donated to Lucene by Alex Murzaku.
> I have currently built a solid language model for use with the language
> identifier for the languages Danish, Dutch, English, Finnish, French,
> German, Italian, Norwegian, Portuguese, Spanish and Swedish.
>
> The language identifier is based on a Naive Bayes classifier. Now, this
> is all nice, but I have some integration questions, and I hope you can
> help out.
>
> Basically, the process of indexing is:
> Create an analyzer
> Open an IndexWriter
> Pass it the analyzer
> Process a document
> Add the document to the index
> Optimize the writer
> Close the writer
>
> Now, the language identifier can help automatically identify what
> language a document is written in. Based on the suggestion of the
> identifier, an appropriate analyzer can be selected.
>
> This is all great, but...
>
> 1. Do we index all the terms from documents in various languages
> into one index?
> 2. Do I build a specialised Analyzer that selects the stemmer based on
> the language identifier, or leave that up to the custom indexing
> application?
>
> Your thoughts please...
>
> regards,
>
> Maurits
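
One possible shape for question 2, as a sketch: SnowballAnalyzer here
stands for the analyzer class that grew out of the donated stemmers (its
constructor shape is an assumption), and the language name would come from
running the identifier on the raw text first.

import java.io.Reader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.snowball.SnowballAnalyzer;

// Hypothetical wrapper: the indexing application guesses the language of
// each document, then analyzes it with the matching Snowball stemmer.
public class LanguageAnalyzer extends Analyzer {
    private final Analyzer delegate;

    public LanguageAnalyzer(String snowballName) { // e.g. "English", "Dutch"
        this.delegate = new SnowballAnalyzer(snowballName);
    }

    public TokenStream tokenStream(String fieldName, Reader reader) {
        return delegate.tokenStream(fieldName, reader);
    }
}

Since an IndexWriter is built around a single analyzer, the simplest use is
one indexing pass per detected language, each with its own analyzer.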






Re: Concurency in Lucene

2002-10-16 Thread Maurits van Wijland

Hi Kiril,

>
> I wanted to see how much interest is out there for such a solution and
> whether Lucene developers feel that this should be part of Lucene. If
> there is enough interest, I would like to donate this code to Lucene.

I think that this would be a very good addition to Lucene. In case the
developers group doesn't think so, please consider sharing this code with
us users, because these are features that Lucene currently lacks.

We all face these problems and have our workarounds. I use a staging
server where ALL documents are indexed, and then a new index is published.
I would love to have some sort of transactions and fewer problems with
crashes.
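
The staging pattern in its simplest form, as a sketch with made-up paths
(the IndexWriter calls follow the Lucene 1.x API):

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;

// Build into a fresh directory, then swap it into place, so searchers
// never open a half-built index.
public class StagingPublish {
    public static void main(String[] args) throws Exception {
        File staging = new File("/indexes/new");
        IndexWriter writer =
                new IndexWriter(staging, new StandardAnalyzer(), true);
        // ... add all documents here ...
        writer.optimize();
        writer.close();

        // Publish: retire the live index and move the fresh one in.
        // (renameTo only works within one file system.)
        File live = new File("/indexes/live");
        if (live.exists()) live.renameTo(new File("/indexes/old"));
        staging.renameTo(live);
        // Searchers reopened after this point see the new index.
    }
}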

So, Kiril, let us in on it, please.

regards,
maurits.






Iterate the index by document

2002-10-05 Thread Maurits van Wijland

Hi all,

Is there any way one can iterate through the index
and retrieve all the documents?

Thanks anyone...

regards,

maurits.
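
For reference, the usual idiom (the index path is assumed): walk the
document numbers up to maxDoc() and skip the deleted slots.

import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexReader;

// Visit every live document in the index by document number.
public class DumpAllDocs {
    public static void main(String[] args) throws Exception {
        IndexReader reader = IndexReader.open("/path/to/index");
        for (int i = 0; i < reader.maxDoc(); i++) {
            if (reader.isDeleted(i)) continue; // skip deleted slots
            Document doc = reader.document(i);
            System.out.println(doc.get("url"));
        }
        reader.close();
    }
}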



- Original Message -
From: "Otis Gospodnetic" <[EMAIL PROTECTED]>
To: "Lucene Users List" <[EMAIL PROTECTED]>
Sent: Saturday, October 05, 2002 5:46 PM
Subject: Re: Problem With org.apache.lucene.demo.IndexHTML class on Sun
Solaris


> You can try giving java more memory.
> Run this to see what options you need to specify: java -help.
>
> Otis
>
>
>
> --- Ravi Kothiyal <[EMAIL PROTECTED]> wrote:
> > Dear Friends,
> >
> > I am using lucene-1.2. When I try to create an HTML index with
> > java org.apache.lucene.demo.IndexHTML -create -index /opt/index
> > /webdev
> >
> > it starts creating the index, but after some time it gives an out of
> > memory exception and quits. Can you please help me in this
> > matter?
> >
> > Best Regards
> >
> > Ravi
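
(On the quoted memory problem: the usual concrete form of Otis's advice is
the standard JVM heap option, for example

java -Xmx512m org.apache.lucene.demo.IndexHTML -create -index /opt/index /webdev

where 512m is an arbitrary size.)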






Re: problems with HTML Parser

2002-08-14 Thread Maurits van Wijland

Keith,

I haven't noticed the problem with the parser... but you caught my
attention by saying that you have a PDFParser!!!

Are you able to contribute this PDFParser?

Maurits.
- Original Message -
From: "Keith Gunn" <[EMAIL PROTECTED]>
To: "Lucene Users List" <[EMAIL PROTECTED]>
Sent: Wednesday, August 14, 2002 9:46 AM
Subject: problems with HTML Parser


> Has anyone noticed that the HTML parser that comes with
> Lucene joins terms together when parsing a file?
> I used to think it was my PDFParser, but after fixing that
> I found out it was the HTMLParser.
>
> I managed to find a replacement parser that doesn't join terms.
>
> Just wondered if anyone else had come across this problem?






Re: Portuguese Analyser

2002-08-11 Thread Maurits van Wijland

Hi Bizu,

please send the source to the dev group as well. They might be interested
in including this in Lucene. They already have German and English versions,
so why not have your Portuguese version included...

regards,

Maurits
- Original Message -
From: "Bizu de Anúncio" <>
To: "Lucene Users List" <[EMAIL PROTECTED]>
Sent: Thursday, August 08, 2002 7:56 AM
Subject: RES: Portuguese Analyser


> I'm sending the source to your e-mail ...
>
> -Original Message-
> From: William W [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, August 08, 2002 10:40 AM
> To: [EMAIL PROTECTED]
> Subject: Portuguese Analyser
>
>
> Hi All,
>
> Does somebody have a Portuguese Analyser?
> Thanks,
> William.






Re: Portuguese Analyser

2002-06-11 Thread Maurits van Wijland

Hi William and all,

We should clearly look at the Snowball initiative. There are a dozen
Western European stemmers that could easily be included with Lucene. By
adding a language identification mechanism (based on the frequency of stop
words, for instance; see the sketch below), we can build a generic analyser
that will use the correct language-specific analyser.
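
A toy sketch of that stop-word idea (the word lists are trimmed to a few
entries for illustration; a real identifier would use full lists or
character n-grams):

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Guess the language by counting how many of each language's stop words
// occur in the text.
public class StopWordLanguageGuesser {
    private static final Map<String, Set<String>> STOPS = new HashMap<>();
    static {
        STOPS.put("en", new HashSet<>(Arrays.asList("the", "and", "of", "with")));
        STOPS.put("pt", new HashSet<>(Arrays.asList("que", "uma", "para", "com")));
        STOPS.put("nl", new HashSet<>(Arrays.asList("het", "een", "niet", "met")));
    }

    public static String guess(String text) {
        String[] words = text.toLowerCase().split("\\W+");
        String best = "unknown";
        int bestHits = 0;
        for (Map.Entry<String, Set<String>> e : STOPS.entrySet()) {
            int hits = 0;
            for (String w : words) {
                if (e.getValue().contains(w)) hits++;
            }
            if (hits > bestHits) {
                bestHits = hits;
                best = e.getKey();
            }
        }
        return best;
    }
}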

So, William, have a look at Snowball on SourceForge; it has a Portuguese
stemmer.

regards,

maurits.
- Original Message -
From: "William W" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, June 11, 2002 12:18 PM
Subject: Portuguese Analyser


>
> Hi All,
>
> Does somebody have a Portuguese Analyser?
> Thanks,
> William.






Re: Summarization tool?

2002-05-14 Thread Maurits van Wijland

Well, the main keyword here is freeware... now that's a big NO! But, next
best thing... there is a Dutch company named "Carp Technologies" (and no,
I don't work for them... :) that has a summarization tool (also written
in Java).

So, the URL is: http://www.carp-technologies.nl/en/home.html

There is a 'free' personal edition...

so, have fun

Maurits


- Original Message -
From: "Nikhil G. Daddikar" <[EMAIL PROTECTED]>
To: "Lucene" <[EMAIL PROTECTED]>
Sent: Monday, May 13, 2002 7:31 PM
Subject: OT: Summarization tool?


> Hello,
>
> This is slightly off-topic, but does anyone know of a good freeware
> summarization tool, i.e. something that generates an abstract out
> of a text?
>
> Thanks.