[jira] Commented: (LUCENE-675) Lucene benchmark: objective performance test for Lucene

2006-11-13 Thread Grant Ingersoll (JIRA)
[ 
http://issues.apache.org/jira/browse/LUCENE-675?page=comments#action_12449409 ] 

Grant Ingersoll commented on LUCENE-675:


OK, how about I commit my changes, then you can add a patch that shows your 
ideas?


> Lucene benchmark: objective performance test for Lucene
> ---
>
> Key: LUCENE-675
> URL: http://issues.apache.org/jira/browse/LUCENE-675
> Project: Lucene - Java
>  Issue Type: Improvement
>Reporter: Andrzej Bialecki 
> Assigned To: Grant Ingersoll
> Attachments: benchmark.patch, BenchmarkingIndexer.pm, 
> extract_reuters.plx, LuceneBenchmark.java, LuceneIndexer.java, timedata.zip, 
> tiny.alg, tiny.properties
>
>
> We need an objective way to measure the performance of Lucene, both indexing 
> and querying, on a known corpus. This issue is intended to collect comments 
> and patches implementing a suite of such benchmarking tests.
> Regarding the corpus: one of the widely used and freely available corpora is 
> the original Reuters collection, available from 
> http://www-2.cs.cmu.edu/afs/cs.cmu.edu/project/theo-20/www/data/news20.tar.gz 
> or 
> http://people.csail.mit.edu/u/j/jrennie/public_html/20Newsgroups/20news-18828.tar.gz.
>  I propose to use this corpus as a base for benchmarks. The benchmarking 
> suite could automatically retrieve it from known locations, and cache it 
> locally.
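A minimal sketch of what that retrieve-and-cache step could look like (the class name, buffer size, and call sites below are illustrative only and are not taken from the attached patches):

import java.io.*;
import java.net.URL;

public class CorpusFetcher {

    /**
     * Downloads the corpus archive to cacheFile unless a non-empty copy is
     * already present, then returns the cached file for the benchmark to
     * unpack and index.
     */
    public static File fetchIfMissing(URL corpusUrl, File cacheFile) throws IOException {
        if (cacheFile.exists() && cacheFile.length() > 0) {
            return cacheFile;                      // already cached locally
        }
        InputStream in = corpusUrl.openStream();
        OutputStream out = new FileOutputStream(cacheFile);
        try {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        } finally {
            in.close();
            out.close();
        }
        return cacheFile;
    }
}

With something along these lines the benchmark would only hit the network on the first run.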




RE: [jira] Commented: (LUCENE-532) [PATCH] Indexing on Hadoop distributed file system

2006-11-13 Thread Kevin Oliver
We considered patching this code when we ran into a data consistency bug
with our file system. It wasn't too difficult to patch
CompoundFileWriter to output lengths instead of offsets.

Naturally, I can't seem to find my implementation of this, but as I
recall it was fairly straightforward. I think it also required adding a
format version number at the beginning of the cfs file so that
CompoundFileReader knew which data format it was reading.

Anyway, I just want to point out that there was an easy enough workaround
for this that didn't require creating a new set of files.
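A rough sketch of that layout, using plain java.io streams rather than Lucene's actual CompoundFileWriter internals (the class name and version constant are made up for illustration and are not taken from the attached patch):

import java.io.*;
import java.util.List;

/** Toy illustration: write a version marker, then per-entry names and
 *  lengths, then the concatenated entry data. No seek-back is needed,
 *  because the reader can reconstruct each offset by summing the lengths
 *  of the preceding entries. */
class LengthBasedCompoundWriter {

    static final int FORMAT_VERSION = -1;   // hypothetical marker so the reader detects the new layout

    static void write(DataOutputStream out, List files) throws IOException {
        out.writeInt(FORMAT_VERSION);
        out.writeInt(files.size());
        for (int i = 0; i < files.size(); i++) {
            File f = (File) files.get(i);
            out.writeUTF(f.getName());
            out.writeLong(f.length());      // store the length, not an absolute offset
        }
        byte[] buf = new byte[8192];
        for (int i = 0; i < files.size(); i++) {
            FileInputStream in = new FileInputStream((File) files.get(i));
            try {
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            } finally {
                in.close();
            }
        }
    }
}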

-Original Message-
From: Michael McCandless (JIRA) [mailto:[EMAIL PROTECTED] 
Sent: Saturday, November 11, 2006 8:04 AM
To: java-dev@lucene.apache.org
Subject: [jira] Commented: (LUCENE-532) [PATCH] Indexing on Hadoop
distributed file system

[ 
http://issues.apache.org/jira/browse/LUCENE-532?page=comments#action_12448989 ] 

Michael McCandless commented on LUCENE-532:
---

Alas, while implementing the corresponding changes to CompoundFileReader for
a CFS format where the file offsets are stored at the end of the file, I
discovered that this approach isn't viable.  I had been thinking the reader
would look at the file length, subtract numEntry*sizeof(long), seek there,
and then read the offsets (longs).  The problem is that we can't know
sizeof(long), since that depends on the actual storage implementation, for
the same reasoning as above: we can't assume that one byte always equals one
file position.

So, then, the only solution I can think of (to avoid seeking during write)
would be to write a separate file, for each *.cfs file, that contains the
file offsets corresponding to the cfs file.  E.g., if we have _1.cfs we would
also have _1.cfsx, which holds the file offsets.  This is somewhat costly if
we care about the number of files (it doubles the file count in the simple
case of a bunch of segments with no deletes or separate norms).
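For what it's worth, writing such a companion offsets file might look roughly like this, assuming the 1.9+ Directory/IndexOutput API (the class name is purely illustrative):

import org.apache.lucene.store.Directory;
import org.apache.lucene.store.IndexOutput;

/** Hypothetical sketch: after streaming the entries into "_1.cfs" without
 *  any seeks, write the recorded start offsets into a sibling "_1.cfsx". */
class OffsetSidecarWriter {

    static void writeOffsets(Directory dir, String cfsxName, long[] offsets)
            throws java.io.IOException {
        IndexOutput out = dir.createOutput(cfsxName);
        try {
            out.writeVInt(offsets.length);      // number of entries
            for (int i = 0; i < offsets.length; i++) {
                out.writeLong(offsets[i]);      // start offset of entry i in the .cfs file
            }
        } finally {
            out.close();
        }
    }
}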

Yonik had actually mentioned in LUCENE-704 that fixing CFS writing to
not use seek was not very important, ie, it would be OK to not use
compound files with HDFS as the store.

Does anyone see a better approach?

> [PATCH] Indexing on Hadoop distributed file system
> --
>
> Key: LUCENE-532
> URL: http://issues.apache.org/jira/browse/LUCENE-532
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: Index
>Affects Versions: 1.9
>Reporter: Igor Bolotin
>Priority: Minor
> Attachments: indexOnDFS.patch, SegmentTermEnum.patch, TermInfosWriter.patch
>
>
> In my current project we needed a way to create very large Lucene indexes on 
> Hadoop distributed file system. When we tried to do it directly on DFS using 
> Nutch FsDirectory class - we immediately found that indexing fails because 
> DfsIndexOutput.seek() method throws UnsupportedOperationException. The reason 
> for this behavior is clear - DFS does not support random updates and so 
> seek() method can't be supported (at least not easily).
>  
> Well, if we can't support random updates - the question is: do we really need 
> them? Search in the Lucene code revealed 2 places which call 
> IndexOutput.seek() method: one is in TermInfosWriter and another one in 
> CompoundFileWriter. As we weren't planning to use CompoundFileWriter - the 
> only place that concerned us was in TermInfosWriter.
>  
> TermInfosWriter uses IndexOutput.seek() in its close() method to write total 
> number of terms in the file back into the beginning of the file. It was very 
> simple to change file format a little bit and write number of terms into last 
> 8 bytes of the file instead of writing them into beginning of file. The only 
> other place that should be fixed in order for this to work is in 
> SegmentTermEnum constructor - to read this piece of information at position = 
> file length - 8.
>  
> With this format hack - we were able to use FsDirectory to write index 
> directly to DFS without any problems. Well - we still don't index directly to 
> DFS for performance reasons, but at least we can build small local indexes 
> and merge them into the main index on DFS without copying big main index back 
> and forth. 
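A sketch of the reader side of that trailer trick, assuming the 1.9+ Directory/IndexInput API (the helper class is illustrative; in the patch the change lives in the SegmentTermEnum constructor as described above):

import org.apache.lucene.store.Directory;
import org.apache.lucene.store.IndexInput;

/** Sketch: the term count is appended as the final 8 bytes of the file
 *  instead of being seeked back into the header, so the reader fetches it
 *  from position (length - 8). */
class TrailerSizeReader {

    static long readTermCount(Directory dir, String tisName) throws java.io.IOException {
        IndexInput in = dir.openInput(tisName);
        try {
            in.seek(in.length() - 8);   // the long written by close() sits in the last 8 bytes
            return in.readLong();
        } finally {
            in.close();
        }
    }
}

Only appends are needed on the write side, which is what a DFS-style store can support.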




[jira] Commented: (LUCENE-675) Lucene benchmark: objective performance test for Lucene

2006-11-13 Thread Doron Cohen (JIRA)
[ 
http://issues.apache.org/jira/browse/LUCENE-675?page=comments#action_12449419 ] 

Doron Cohen commented on LUCENE-675:


Sounds good.

In this case I will add my stuff under a new package, 
org.apache.lucene.benchmark2 (this package would have no dependencies on 
org.apache.lucene.benchmark). I will also add targets in build.xml, and add 
.alg files under conf.
Does that make sense?

Do you already know when you are going to commit it?

> Lucene benchmark: objective performance test for Lucene
> ---
>
> Key: LUCENE-675
> URL: http://issues.apache.org/jira/browse/LUCENE-675
> Project: Lucene - Java
>  Issue Type: Improvement
>Reporter: Andrzej Bialecki 
> Assigned To: Grant Ingersoll
> Attachments: benchmark.patch, BenchmarkingIndexer.pm, 
> extract_reuters.plx, LuceneBenchmark.java, LuceneIndexer.java, timedata.zip, 
> tiny.alg, tiny.properties
>
>
> We need an objective way to measure the performance of Lucene, both indexing 
> and querying, on a known corpus. This issue is intended to collect comments 
> and patches implementing a suite of such benchmarking tests.
> Regarding the corpus: one of the widely used and freely available corpora is 
> the original Reuters collection, available from 
> http://www-2.cs.cmu.edu/afs/cs.cmu.edu/project/theo-20/www/data/news20.tar.gz 
> or 
> http://people.csail.mit.edu/u/j/jrennie/public_html/20Newsgroups/20news-18828.tar.gz.
>  I propose to use this corpus as a base for benchmarks. The benchmarking 
> suite could automatically retrieve it from known locations, and cache it 
> locally.




[jira] Updated: (LUCENE-532) [PATCH] Indexing on Hadoop distributed file system

2006-11-13 Thread Kevin Oliver (JIRA)
 [ http://issues.apache.org/jira/browse/LUCENE-532?page=all ]

Kevin Oliver updated LUCENE-532:


Attachment: cfs-patch.txt

Here are some diffs showing how to remove the seeks from CompoundFileWriter 
(this is against an older version of Lucene, 1.4.2 I think, but the general 
idea is the same). There's also a test. 

> [PATCH] Indexing on Hadoop distributed file system
> --
>
> Key: LUCENE-532
> URL: http://issues.apache.org/jira/browse/LUCENE-532
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: Index
>Affects Versions: 1.9
>Reporter: Igor Bolotin
>Priority: Minor
> Attachments: cfs-patch.txt, indexOnDFS.patch, SegmentTermEnum.patch, 
> TermInfosWriter.patch
>
>
> In my current project we needed a way to create very large Lucene indexes on 
> Hadoop distributed file system. When we tried to do it directly on DFS using 
> Nutch FsDirectory class - we immediately found that indexing fails because 
> DfsIndexOutput.seek() method throws UnsupportedOperationException. The reason 
> for this behavior is clear - DFS does not support random updates and so 
> seek() method can't be supported (at least not easily).
>  
> Well, if we can't support random updates - the question is: do we really need 
> them? Search in the Lucene code revealed 2 places which call 
> IndexOutput.seek() method: one is in TermInfosWriter and another one in 
> CompoundFileWriter. As we weren't planning to use CompoundFileWriter - the 
> only place that concerned us was in TermInfosWriter.
>  
> TermInfosWriter uses IndexOutput.seek() in its close() method to write total 
> number of terms in the file back into the beginning of the file. It was very 
> simple to change file format a little bit and write number of terms into last 
> 8 bytes of the file instead of writing them into beginning of file. The only 
> other place that should be fixed in order for this to work is in 
> SegmentTermEnum constructor - to read this piece of information at position = 
> file length - 8.
>  
> With this format hack - we were able to use FsDirectory to write index 
> directly to DFS without any problems. Well - we still don't index directly to 
> DFS for performance reasons, but at least we can build small local indexes 
> and merge them into the main index on DFS without copying big main index back 
> and forth. 




[jira] Commented: (LUCENE-532) [PATCH] Indexing on Hadoop distributed file system

2006-11-13 Thread Michael McCandless (JIRA)
[ 
http://issues.apache.org/jira/browse/LUCENE-532?page=comments#action_12449446 ] 

Michael McCandless commented on LUCENE-532:
---

Thank you for the patch & unit test!

This is actually the same approach that I started with.  But I ruled
it out because I don't think it's safe to do arithmetic (ie, adding
lengths to compute positions) on file positions.

Meaning, one can imagine a Directory implementation that's doing some
kind of compression where on writing N bytes the file position does
not in fact advance by N bytes.  Or maybe an implementation that must
escape certain bytes, or it's writing to XML or using some kind of
alternate coding system, or something along these lines.  I don't know
if such Directory implementations exist today, but, I don't want to
break them if they do nor preclude them in the future.

And so the only value you should ever pass to "seek()" is a value you
previously obtained by calling "getFilePointer()".  The current
javadocs for these methods seem to imply this.

However, on looking into this question further ... I do see that there
are places now where Lucene already does arithmetic on file positions.
For example, in accessing a *.fdx file or *.tdx file we assume we can
find a given entry at file position FORMAT_SIZE + 8 * index.

Maybe it is OK to make the definition of IndexOutput.seek() stricter, by
requiring that the position we pass to seek is always the same as "the
number of bytes written", thereby allowing us to do arithmetic based on
bytes/lengths and call seek with such values?  I'm nervous about making
this API change.

I think this is the open question.  Does anyone have any input to help
answer this question?

Lucene currently makes this assumption, albeit in a fairly contained
way I think (most other calls to seek seem to use values previously
obtained from getFilePointer()).

> [PATCH] Indexing on Hadoop distributed file system
> --
>
> Key: LUCENE-532
> URL: http://issues.apache.org/jira/browse/LUCENE-532
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: Index
>Affects Versions: 1.9
>Reporter: Igor Bolotin
>Priority: Minor
> Attachments: cfs-patch.txt, indexOnDFS.patch, SegmentTermEnum.patch, 
> TermInfosWriter.patch
>
>
> In my current project we needed a way to create very large Lucene indexes on 
> Hadoop distributed file system. When we tried to do it directly on DFS using 
> Nutch FsDirectory class - we immediately found that indexing fails because 
> DfsIndexOutput.seek() method throws UnsupportedOperationException. The reason 
> for this behavior is clear - DFS does not support random updates and so 
> seek() method can't be supported (at least not easily).
>  
> Well, if we can't support random updates - the question is: do we really need 
> them? Search in the Lucene code revealed 2 places which call 
> IndexOutput.seek() method: one is in TermInfosWriter and another one in 
> CompoundFileWriter. As we weren't planning to use CompoundFileWriter - the 
> only place that concerned us was in TermInfosWriter.
>  
> TermInfosWriter uses IndexOutput.seek() in its close() method to write total 
> number of terms in the file back into the beginning of the file. It was very 
> simple to change file format a little bit and write number of terms into last 
> 8 bytes of the file instead of writing them into beginning of file. The only 
> other place that should be fixed in order for this to work is in 
> SegmentTermEnum constructor - to read this piece of information at position = 
> file length - 8.
>  
> With this format hack - we were able to use FsDirectory to write index 
> directly to DFS without any problems. Well - we still don't index directly to 
> DFS for performance reasons, but at least we can build small local indexes 
> and merge them into the main index on DFS without copying big main index back 
> and forth. 




[jira] Commented: (LUCENE-675) Lucene benchmark: objective performance test for Lucene

2006-11-13 Thread Grant Ingersoll (JIRA)
[ 
http://issues.apache.org/jira/browse/LUCENE-675?page=comments#action_12449450 ] 

Grant Ingersoll commented on LUCENE-675:


I'm not a big fan of tacking a number onto the end of Java names, as it 
doesn't tell you much about what's in the file or package.  How about 
ConfigurableBenchmarker or PropertyBasedBenchmarker or something along those 
lines, since what you are proposing is a property-based one?  I think it can 
just go in the benchmark package, or you could make a subpackage under there 
that is more descriptive.

I will try to commit tonight or tomorrow morning.

> Lucene benchmark: objective performance test for Lucene
> ---
>
> Key: LUCENE-675
> URL: http://issues.apache.org/jira/browse/LUCENE-675
> Project: Lucene - Java
>  Issue Type: Improvement
>Reporter: Andrzej Bialecki 
> Assigned To: Grant Ingersoll
> Attachments: benchmark.patch, BenchmarkingIndexer.pm, 
> extract_reuters.plx, LuceneBenchmark.java, LuceneIndexer.java, timedata.zip, 
> tiny.alg, tiny.properties
>
>
> We need an objective way to measure the performance of Lucene, both indexing 
> and querying, on a known corpus. This issue is intended to collect comments 
> and patches implementing a suite of such benchmarking tests.
> Regarding the corpus: one of the widely used and freely available corpora is 
> the original Reuters collection, available from 
> http://www-2.cs.cmu.edu/afs/cs.cmu.edu/project/theo-20/www/data/news20.tar.gz 
> or 
> http://people.csail.mit.edu/u/j/jrennie/public_html/20Newsgroups/20news-18828.tar.gz.
>  I propose to use this corpus as a base for benchmarks. The benchmarking 
> suite could automatically retrieve it from known locations, and cache it 
> locally.




Re: ParallelMultiSearcher reimplementation

2006-11-13 Thread Doug Cutting

Chuck Williams wrote:

I followed this same logic in ParallelWriter and got burned.  My first
implementation (still the version submitted as a patch in jira) used
dynamic threads to add the subdocuments to the parallel subindexes
simultaneously.  This hit a problem with abnormal native heap OOM's in
the jvm.  At first I thought it was simply a thread stack size / java
heap size configuration issue, but adjusting these did not resolve the
issue.  This was on linux.  ps -L showed large numbers of defunct
threads.  jconsole showed enormous growing total-ever-allocated thread
counts.  I switched to a thread pool and the issue went away with the
same config settings.


Can you demonstrate the problem with a standalone program?

Way back in the 90's I implemented a system at Excite that spawned one 
or more Java threads per request, and it ran for days on end, handling 
20 or more requests per second.  The thread spawning overhead was 
insignificant.  That was JDK 1.2 on Solaris.  Have things gotten that 
much worse in the interim?  Today Hadoop's RPC allocates a thread per 
connection, and we see good performance.  So I certainly have 
counterexamples.


Doug





Re: ParallelMultiSearcher reimplementation

2006-11-13 Thread eks dev
Maybe someone is interested: I just remembered that we tested pure Hadoop RPC 
a few (5+) months ago in a simple setup, a kind of balancing server receiving 
requests and distributing them to 3 "search units"...

We went that far because Java RMI proved to have ugly latency problems (or we 
did not get it right, we don't know for sure) in a well-under-a-second 
scenario. 

All in all, spawning a new thread (Hadoop RPC) on Java 1.5, with a round trip 
over 2 network nodes (100MB) there and back, came in at slightly under 
3k/second on a few-kB message. 


- Original Message 
From: Doug Cutting <[EMAIL PROTECTED]>
To: java-dev@lucene.apache.org
Sent: Monday, 13 November, 2006 9:50:28 PM
Subject: Re: ParallelMultiSearcher reimplementation

Chuck Williams wrote:
> I followed this same logic in ParallelWriter and got burned.  My first
> implementation (still the version submitted as a patch in jira) used
> dynamic threads to add the subdocuments to the parallel subindexes
> simultaneously.  This hit a problem with abnormal native heap OOM's in
> the jvm.  At first I thought it was simply a thread stack size / java
> heap size configuration issue, but adjusting these did not resolve the
> issue.  This was on linux.  ps -L showed large numbers of defunct
> threads.  jconsole showed enormous growing total-ever-allocated thread
> counts.  I switched to a thread pool and the issue went away with the
> same config settings.

Can you demonstrate the problem with a standalone program?

Way back in the 90's I implemented a system at Excite that spawned one 
or more Java threads per request, and it ran for days on end, handling 
20 or more requests per second.  The thread spawning overhead was 
insignificant.  That was JDK 1.2 on Solaris.  Have things gotten that 
much worse in the interim?  Today Hadoop's RPC allocates a thread per 
connection, and we see good performance.  So I certainly have 
counterexamples.

Doug










Re: Hibernate Lucene trademark issues

2006-11-13 Thread Emmanuel Bernard

Hi Lukas,
I'd be happy to answer your question, but I don't think the Lucene dev list 
is the appropriate place for that kind of discussion. Let's move it over to 
http://forum.hibernate.org/viewforum.php?f=9 (or to the Lucene user list if 
you prefer).


Emmanuel

Lukas Vlcek wrote:


Hi Emmanuel,

I am interested in your solution. I plan to use Lucene and Hibernate in my 
next project, and search will play a very important role (*stake-holder* 
functionality). I have heard of the Compass project, which also introduces a 
search (Lucene) layer on top of Hibernate. I haven't had a chance to study it 
in detail yet.

Do you think you could give me some high-level comments about your motivation 
for implementing Lucene search directly in the Hibernate code? Couldn't you 
just use the Compass project? Is there any fundamental difference between 
your approach and Compass?

Many thanks,
Lukas

On 11/6/06, Emmanuel Bernard <[EMAIL PROTECTED]> wrote:
>
> Hi guys,
> I'm Emmanuel Bernard from JBoss.
> I'm the current lead developer of the Hibernate Lucene integration module.
> The goal of this project is to facilitate the integration of a search
> capability into Hibernate based applications. And guess what, I use Lucene
> ;-)
>
> http://www.hibernate.org/hib_docs/annotations/reference/en/html/lucene.html
> http://www.mail-archive.com/hibernate-dev%40lists.jboss.org/msg00392.html
>
> I realized this weekend that the 'Hibernate Lucene' name might infringe
> the Apache Lucene trademark policy.
> http://www.cafepress.com/lucene/ seems to state that Lucene is a
> trademark of the Apache Software Foundation (nice golf shirt BTW),
> but I wasn't able to find any document explaining the fair use of the
> Lucene brand (the license as well as the notice seem to be silent on
> this subject).
>
> Even if Lucene is not trademarked, what do you consider a fair use of
> your brand? I'm happy to rename my project; I guess the initial choice
> was more a tribute to your project than anything else.
>
> Emmanuel
>
> PS: please forward this email to the appropriate persons if this is not
> the case already (PMC or whatever)






Re: ParallelMultiSearcher reimplementation

2006-11-13 Thread Chuck Williams


Doug Cutting wrote on 11/13/2006 10:50 AM:
> Chuck Williams wrote:
>> I followed this same logic in ParallelWriter and got burned.  My first
>> implementation (still the version submitted as a patch in jira) used
>> dynamic threads to add the subdocuments to the parallel subindexes
>> simultaneously.  This hit a problem with abnormal native heap OOM's in
>> the jvm.  At first I thought it was simply a thread stack size / java
>> heap size configuration issue, but adjusting these did not resolve the
>> issue.  This was on linux.  ps -L showed large numbers of defunct
>> threads.  jconsole showed enormous growing total-ever-allocated thread
>> counts.  I switched to a thread pool and the issue went away with the
>> same config settings.
>
> Can you demonstrate the problem with a standalone program?
>
> Way back in the 90's I implemented a system at Excite that spawned one
> or more Java threads per request, and it ran for days on end, handling
> 20 or more requests per second.  The thread spawning overhead was
> insignificant.  That was JDK 1.2 on Solaris.  Have things gotten that
> much worse in the interim?  Today Hadoop's RPC allocates a thread per
> connection, and we see good performance.  So I certainly have
> counterexamples.

Are you pushing memory to the limit?  In my case, we need a maximally
sized Java heap (about 2.5G on linux) and so carefully minimize the
thread stack and perm space sizes.  My suspicion is that it takes a
while after a thread is defunct before all resources are reclaimed.  We
are hitting our server with 50 simultaneous threads doing indexing, each
of which writes 6 parallel subindexes in a separate thread.  This yields
hundreds of threads created per second in tight total thread stack
space; the process continually bumped over the native heap limit.  With
the change to thread pools, and therefore no dynamic creation and
destruction of thread stacks, all works fine.
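A generic illustration of that switch, using java.util.concurrent (this is
not the actual ParallelWriter code, just the pattern):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Reuse a fixed pool of worker threads for the parallel sub-index writes
 *  instead of spawning and discarding a thread per sub-document, so thread
 *  stacks are allocated once rather than hundreds of times per second. */
class SubIndexWriterPool {

    private final ExecutorService pool;

    SubIndexWriterPool(int numSubIndexes) {
        this.pool = Executors.newFixedThreadPool(numSubIndexes);
    }

    void submitWrite(Runnable subIndexWrite) {
        pool.execute(subIndexWrite);   // queued onto an existing, long-lived thread
    }

    void shutdown() {
        pool.shutdown();
    }
}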

Unless you are running with a maximal Java heap, you are unlikely to
have the issue as there is plenty of space left over for the native
heap, so a delay in thread stack reclamation would yield a larger
average process size, but would not cause OOM's.

Chuck


