I am a little bit curious: do you _really_ have 1000 requests/second, or
did you just throw some numbers in? ;)

Sebastian, the numbers are supposed ones; I'm asking to get some pre-evaluation :)

Even at times when there is not that much traffic? An automatic backup
at 3:00 in the morning, for example?

3:00 in the morning in one country is 9 AM in another country and 3 PM
in yet another.

By the way, thank you so much, guys; I wanted tidbits and you gave me more.

Stuart, I recall your replies in other situations, and you have always
helped me improve. The list is happy to have you.

Sincerely
Negin Nickparsa


On Wed, Sep 18, 2013 at 3:39 PM, Sebastian Krebs <krebs....@gmail.com> wrote:

>
>
>
> 2013/9/18 Negin Nickparsa <nickpa...@gmail.com>
>
>> Thank you Camilo
>>
>> To be more detailed: suppose the website has 80,000 users, each page
>> takes 200 ms to render, and you have a thousand hits per second, so we
>> want to reduce the rendering time. Is there any way to do that?
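>>
>> (Back-of-the-envelope, assuming those numbers: 1000 requests/second at
>> 0.2 s per request means about 1000 x 0.2 = 200 requests in flight at
>> any moment, so roughly 200 PHP workers are busy at all times.)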
>>
>
> Read about frontend/proxy caching (Nginx, Varnish) and ESI/SSI includes
> (also Nginx and Varnish ;)). The idea is simply: "If you don't have to
> process something on every request in the backend, don't process it in
> the backend on every request".
>
> But maybe you mixed up some words, because rendering time is the time
> consumed by the renderer within the browser (HTML and CSS). You can
> improve that by improving your HTML/CSS :)
>
>
> I am a little bit curious: do you _really_ have 1000 requests/second, or
> did you just throw some numbers in? ;)
>
>
>>
>> The other thing is: suppose they want to upload files simultaneously,
>> and the videos are hosted on the website itself, not on another server
>> like YouTube, so the streams are really consuming the bandwidth.
>>
>
> Well, if there are streams, there are streams. I cannot imagine a way
> someone could stream a video without downloading it.
>
>
>>
>> Also, it is troublesome to take backups: when backing up bulk data you
>> run into locking problems.
>>
>
> Even at times when there is not that much traffic? An automatic backup
> at 3:00 in the morning, for example?
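>
> A rough sketch of such a night job (the paths, database name and admin
> address are placeholders, credentials are assumed to live in ~/.my.cnf,
> and --single-transaction only avoids locking for InnoDB tables):
>
> <?php
> // backup.php: run from cron at a low-traffic hour, e.g.
> //   0 3 * * * php /path/to/backup.php
> $file = '/backups/db-' . date('Y-m-d') . '.sql.gz';
> $cmd  = 'mysqldump --single-transaction mydb'
>       . ' | gzip > ' . escapeshellarg($file);
> exec($cmd, $output, $status);
> if ($status !== 0) {
>     // Don't fail silently: tell someone the backup broke.
>     mail('admin@example.com', 'DB backup failed', implode("\n", $output));
> }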
>
>
>>
>>
>>
>> Sincerely
>> Negin Nickparsa
>>
>>
>> On Wed, Sep 18, 2013 at 12:50 PM, Camilo Sperberg <unrea...@gmail.com> wrote:
>>
>>>
>>> On Sep 18, 2013, at 09:38, Negin Nickparsa <nickpa...@gmail.com> wrote:
>>>
>>> > Thank you Sebastian.. actually, I will already have one if I qualify
>>> > for the job. Yes, and I may fail to handle it; that's why I asked for
>>> > guidance. I wanted some tidbits to start from. I have searched through
>>> > YSlow, HTTrack and others.
>>> > I have searched through the PHP list in my email too before asking
>>> > this question; it is the kind of question that benefits everyone and
>>> > has not been asked directly.
>>> >
>>> >
>>> > Sincerely
>>> > Negin Nickparsa
>>> >
>>> >
>>> > On Wed, Sep 18, 2013 at 10:45 AM, Sebastian Krebs
>>> > <krebs....@gmail.com> wrote:
>>> >
>>> >>
>>> >>
>>> >>
>>> >> 2013/9/18 Negin Nickparsa <nickpa...@gmail.com>
>>> >>
>>> >>> In general, what are the best ways to handle high traffic websites?
>>> >>>
>>> >>> VPS(clouds)?
>>> >>> web analyzers?
>>> >>> dedicated servers?
>>> >>> distributed memory cache?
>>> >>>
>>> >>
>>> >> Yes :)
>>> >>
>>> >> But seriously: that is a topic most of us have spent much time
>>> >> getting into. You could explain it with a bunch of buzzwords.
>>> >> Additionally, how do you define "high-traffic websites"? Do you
>>> >> already _have_ such a site? Or do you _want_ one? It's important,
>>> >> because I've seen it far too often that projects spent too much
>>> >> effort on their "high-traffic infrastructure" and in the end it
>>> >> wasn't that high-traffic ;) I won't say that you cannot be
>>> >> successful, but you should start with an effort you can handle.
>>> >>
>>> >> Regards,
>>> >> Sebastian
>>> >>
>>> >>
>>> >>>
>>> >>>
>>> >>> Sincerely
>>> >>> Negin Nickparsa
>>> >>>
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> github.com/KingCrunch
>>> >>
>>>
>>> Your question is way too vague to be answered properly... My best guess
>>> would be that it depends heavily on the type of website you have and on
>>> how the current implementation is being, well... implemented.
>>>
>>> Simply said: what works for Facebook may/will not work for LinkedIn,
>>> Twitter or Google, mainly because the type of search differs A LOT:
>>> Facebook is about relations between people, Twitter is about small
>>> pieces of data that are mostly not interconnected, while Google is all
>>> about links and every type of content, from little pieces of
>>> information up to whole Wikipedia articles.
>>>
>>> You could start by studying how Varnish and Redis/memcached work, and
>>> how proxies (Nginx et al.), CDNs and that kind of stuff work, but if
>>> you want more specific answers, you had better ask a specific question.
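>>>
>>> For the distributed-cache part, the pattern is small enough to show.
>>> A sketch using the pecl Memcached extension (the server address, the
>>> key and the load_user_from_db() helper are made-up examples):
>>>
>>> <?php
>>> $mc = new Memcached();
>>> $mc->addServer('127.0.0.1', 11211);
>>>
>>> // Cache-aside: try the cache first, hit the database only on a miss.
>>> $user = $mc->get('user:42');
>>> if ($mc->getResultCode() === Memcached::RES_NOTFOUND) {
>>>     $user = load_user_from_db(42);   // hypothetical DB loader
>>>     $mc->set('user:42', $user, 300); // cache it for 5 minutes
>>> }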
>>>
>>> In the PHP area, an opcode cache does the job very well and can
>>> accelerate page loads by several orders of magnitude. I recommend
>>> OPcache, which is already included in PHP 5.5.
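>>>
>>> A typical starting point in php.ini (the numbers are common defaults
>>> to tune per workload, not recommendations for your load, and some
>>> builds also need a zend_extension=opcache.so line):
>>>
>>> opcache.enable=1
>>> opcache.memory_consumption=128      ; MB of shared memory for compiled scripts
>>> opcache.max_accelerated_files=4000  ; raise if the codebase has more files
>>> opcache.revalidate_freq=60          ; re-check file timestamps every 60 s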
>>>
>>> Greetings.
>>>
>>>
>>
>
>
> --
> github.com/KingCrunch
>
