Actually, the problem with 50+ million URLs will be CPU power.
Queries will be slow :( To fix this, we need ASPSeek to work as
a cluster of machines, so that the index and searchd load are shared
between several boxes. We have a clear understanding of how this
can be done, and rough time estimates (about 6 months). The result
will be an ASPSeek capable of scaling to 2-200 machines, and capable
of processing the volumes that Google handles now.

And now the sad truth. For cluster version to come true, some money
is needed - for hardware, for our salary etc. etc. Estimation is
about $150-$200K. If we will find some sponsorship or donation,
we will implement cluster version. Otherwise, it is only a dream.

Adonis El Fakih wrote:
> 
> Hi,
> 
> thanks to everyone for the 2.4 kernel update. That will make it worth upgrading :))
> and if that is the case then I do not see any problems with aspseek getting that
> high..
> again thanks
> 
> [EMAIL PROTECTED] wrote:
> 
> > +-Adonis El Fakih-([EMAIL PROTECTED])-[16.11.01 16:35]:
> > I have been testing it and have close to 4 million URLs, and when I looked
> > at the database files created by MySQL, one of the files is nearing
> > 2Gig, which is the maximum file size limit in Linux.  So once that file
> > reaches 2Gig the whole thing will stop growing.
> 
> AFAIK the file size limit was increased during 2.4 development
> 
>      Balu
> 

-- 
[EMAIL PROTECTED]  ICQ 7551596  Phone +7 903 6722750
Hard work may not kill you,  but why take chances?
--
