[symfony-users] Re: Symfony Production Performance improvements

2009-03-11 Thread saad

Hi,
I was at a php conference last week, and I attended a really good
presentation from Ilia Alshanetsky:
http://ilia.ws/files/phpquebec_2009.pdf
It is NOT symfony specific, but you can find really good clues for
optimizing your php web app.
To summarize:
- optimize, but touch the code as little as possible
- check your DB (indexes); the vast majority of applications have their
bottleneck in the database, not the code!
- use an opcode cache (like APC, PHP Accelerator, ...)
- use in-memory caches (and put your sessions in memcache instead of the
standard file system - see the small sketch below)
- use on-demand caching (pages and/or SQL results)
- distribution binaries are not optimized for your server: compile
Apache/PHP/DB from source
- use Xdebug and KCachegrind to profile your app
- ...
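
As a tiny illustration of the opcode-cache and memcache-session points above
(just a sketch: it assumes the APC and pecl/memcache extensions are installed,
and these ini settings would normally live in php.ini rather than in
application code):

// Sanity check: is an opcode/data cache (APC here) actually loaded?
if (!extension_loaded('apc'))
{
    echo "APC is not loaded - opcode and data caching are unavailable\n";
}

// Move PHP sessions from the default file backend into memcached.
ini_set('session.save_handler', 'memcache');
ini_set('session.save_path', 'tcp://127.0.0.1:11211');
session_start();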

Thanks

On Mar 10, 1:54 am, Gareth McCumskey  wrote:
> I have actually come across a rather interesting way to use the memory cache
> as specified in the book 
> (http://www.symfony-project.org/book/1_1/18-Performance#chapter_18_sub...
> )
>
> The way it is described to store data in the cache needs three things; a
> unique name that can be easily determined, the value related to that name
> and how long that value stays cached. In our application, our database
> records will very very rarely see any updates. Once records are inserted
> they are only ever retrieved making them ideal for caching. But we cannot
> store every database record into memory. We do have queries that run very
> frequently, however, and each Criteria object is unique for each query with
> the various values and so on. The problem is you cannot store an object,
> like the Criteria object that is built up before you run a Propel query with
> doSelect or similar methods. So, if you serialize the Criteria object, you
> have a string. But this is a very long string (one of the serialized
> Criteria objects I tested with was over 1000 characters long). But you can
> convert it to a hash value. Naturally there is one problem with a hash; the
> possibility that two different strings would create the same hash. So create
> two hashes and concatenate them, an SHA1 hash and an MD5 hash. This can then
> be the name of the item you are storing in memory. Run the DB query and you
> have the value of the query, which can then also be stored into the cache.
>
> As a brief example for what I mean by overriding the doSelect method for a
> model class:
>
> private static function doSelect($c)
> {
>    $serialized_c = serialize($c);
>    $md5_serialized = md5($serialized_c);
>    $sha1_serialized = sha1($serialized_c);
>
>    $cache = new sfAPCCache();
>
>    if ($cache->has('doSelect'.$md5_serialized.$sha1_serialized))
>    {
>       $query_value =
> $cache->get('doSelect'.$md5_serialized.$sha1_serialized);
>    }
>    else
>    {
>       $query_value = parent::doSelect($c);
>       $cache->set('doSelect'.$md5_serialized.$sha1_serialized, $query_value,
> 60);
>    }
>
>    return $query_value;
>
> }
>
> Any comments would be great on this or any problems with doing this kind of
> thing that I may not have seen please feel free to let me know
>
> On Tue, Mar 10, 2009 at 8:06 AM, Gareth McCumskey wrote:
>
> > Thanks for that. I have actually been looking at the function cache amongst
> > others and there is a lot we can do there as our DB records, once inserted,
> > are not likely to change. In fact if they do it means we are having a
> > problem as we store email data for a number of companies in them. Therefore
> > function caching and even memory caching records as we extract them from db
> > would probably help us a lot. It does mean more work code-wise and isn't a
> > "quick-fix", so we plan to start looking at this once we hit Beta where
> > performance will be a major requirement.
>
> > The old system is faster simply because it follows no design pattern except
> > procedural and that is where its speed lies. There are no ORM's, classes or
> > anything like that, and SQL queries are sent straight through to the
> > database using handcoded, dynamic SQL queries as opposed to an ORM generated
> > one and the resultsets are manipulated directly in each "view". In fact
> > there are only views, there is little seperation of business logic and
> > presentation.
>
> > The reason we need symfony for this new version is that we are going to be
> > adding more advanced features that would "complicate" the product beyond
> > what a procedural style would allow us to maintain. We are already
> > struggling to keep the older system maintained and enhanced for our
> > customers as it is. symfony, Propel and even Prototype with scriptaculous
> > help alleviate these maintenance and extensibility issues.
>
> > On Mon, Mar 9, 2009 at 9:13 PM, Richtermeister  wrote:
>
> >> Hi Gareth,
>
> >> after reading all this I feel your time is most likely best spent in
> >> smart caching, since it sounds like the DB is not your bottleneck.
> >> What's easy to overlook when working with symfony, is that compared to
> >> straight pro

[symfony-users] Re: Symfony Production Performance improvements

2009-03-10 Thread Richtermeister

Hi Gareth,

that sounds like a nifty way you came up with, and it is indeed often
used, even in JavaScript, to cache the results of ajax calls by
their url hash.
What I was talking about earlier regarding high-level caching is that
if you apply the caching to the view layer, just saving the html using
symfony's built-in caching configuration, those queries would also
only run once, and you would avoid complicating your code. Does that
make sense?
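
For example, in symfony 1.1 switching on HTML caching for a single action is
just a few lines in the module's config/cache.yml (a minimal sketch; the
module, action name and lifetime are invented for the example):

# apps/frontend/modules/email/config/cache.yml   (hypothetical module)
list:                  # hypothetical action whose output rarely changes
  enabled:     true
  with_layout: false
  lifetime:    600     # keep the rendered HTML for 10 minutes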

Daniel


On Mar 9, 11:54 pm, Gareth McCumskey  wrote:
> I have actually come across a rather interesting way to use the memory cache
> as specified in the book 
> (http://www.symfony-project.org/book/1_1/18-Performance#chapter_18_sub...
> )
>
> The way it is described to store data in the cache needs three things; a
> unique name that can be easily determined, the value related to that name
> and how long that value stays cached. In our application, our database
> records will very very rarely see any updates. Once records are inserted
> they are only ever retrieved making them ideal for caching. But we cannot
> store every database record into memory. We do have queries that run very
> frequently, however, and each Criteria object is unique for each query with
> the various values and so on. The problem is you cannot store an object,
> like the Criteria object that is built up before you run a Propel query with
> doSelect or similar methods. So, if you serialize the Criteria object, you
> have a string. But this is a very long string (one of the serialized
> Criteria objects I tested with was over 1000 characters long). But you can
> convert it to a hash value. Naturally there is one problem with a hash; the
> possibility that two different strings would create the same hash. So create
> two hashes and concatenate them, an SHA1 hash and an MD5 hash. This can then
> be the name of the item you are storing in memory. Run the DB query and you
> have the value of the query, which can then also be stored into the cache.
>
> As a brief example for what I mean by overriding the doSelect method for a
> model class:
>
> private static function doSelect($c)
> {
>    $serialized_c = serialize($c);
>    $md5_serialized = md5($serialized_c);
>    $sha1_serialized = sha1($serialized_c);
>
>    $cache = new sfAPCCache();
>
>    if ($cache->has('doSelect'.$md5_serialized.$sha1_serialized))
>    {
>       $query_value =
> $cache->get('doSelect'.$md5_serialized.$sha1_serialized);
>    }
>    else
>    {
>       $query_value = parent::doSelect($c);
>       $cache->set('doSelect'.$md5_serialized.$sha1_serialized, $query_value,
> 60);
>    }
>
>    return $query_value;
>
> }
>
> Any comments would be great on this or any problems with doing this kind of
> thing that I may not have seen please feel free to let me know
>
> On Tue, Mar 10, 2009 at 8:06 AM, Gareth McCumskey wrote:
>
> > Thanks for that. I have actually been looking at the function cache amongst
> > others and there is a lot we can do there as our DB records, once inserted,
> > are not likely to change. In fact if they do it means we are having a
> > problem as we store email data for a number of companies in them. Therefore
> > function caching and even memory caching records as we extract them from db
> > would probably help us a lot. It does mean more work code-wise and isn't a
> > "quick-fix", so we plan to start looking at this once we hit Beta where
> > performance will be a major requirement.
>
> > The old system is faster simply because it follows no design pattern except
> > procedural and that is where its speed lies. There are no ORM's, classes or
> > anything like that, and SQL queries are sent straight through to the
> > database using handcoded, dynamic SQL queries as opposed to an ORM generated
> > one and the resultsets are manipulated directly in each "view". In fact
> > there are only views, there is little seperation of business logic and
> > presentation.
>
> > The reason we need symfony for this new version is that we are going to be
> > adding more advanced features that would "complicate" the product beyond
> > what a procedural style would allow us to maintain. We are already
> > struggling to keep the older system maintained and enhanced for our
> > customers as it is. symfony, Propel and even Prototype with scriptaculous
> > help alleviate these maintenance and extensibility issues.
>
> > On Mon, Mar 9, 2009 at 9:13 PM, Richtermeister  wrote:
>
> >> Hi Gareth,
>
> >> after reading all this I feel your time is most likely best spent in
> >> smart caching, since it sounds like the DB is not your bottleneck.
> >> What's easy to overlook when working with symfony, is that compared to
> >> straight procedural "get data -> display data" scripts, rendering
> >> templates with hydrated objects is slower, albeit more flexible. So,
> >> if your previous site was coded in a bad way, it was probably using a
> >> lot of "view specific" code, so it's hard to compete with that on pure
> >> speed considerat

[symfony-users] Re: Symfony Production Performance improvements

2009-03-10 Thread Gareth McCumskey
Absolutely. Right now we have a number of queries that run on each page,
because we have a database that stores many clients' data, with certain flags
in the tables set for only a specific client's views. These queries will be
my first focus for caching, because they run on each page load
in order to verify that the data the user is viewing is theirs alone.

At the moment, though, this new product isn't live and is not being interacted
with by customers, only by developers. Once we open the product for Beta use
by clients, we can look at doing that logging, which I think will help us
immensely. One thing we have learnt from the old system when we made changes
is that what you think customers use a lot is not necessari
On Tue, Mar 10, 2009 at 11:53 AM, Fabrice B wrote:

>
> > Naturally there is one problem with a hash; the
> > possibility that two different strings would create the same hash. So
> create
> > two hashes and concatenate them, an SHA1 hash and an MD5 hash.
>
> By definition a hash is supposed to statistically avoid this problem.
> But well, it doesn't cost you much time to concatenate your two hashes
> if you don't trust md5 enough :-)
>
> >As a brief example for what I mean by overriding the doSelect method for a
> > model class
>
> Your idea is quite simple and should work but be aware that it could
> also actually make the application slower ! Cache is interesting only
> if you will use (statistically) the same query more than twice in the
> caching interval. So you must evaluate the number of different queries
> you have, and the number of times they are used in an hour.
>
> If you are sure you only have 100 different queries in your whole site
> and your pages are hit 10.000 times per hour, then it's worth it. If
> you have 10,000 different queries (because the criteria contains a time
> () variable for example) and your pages are hit 1000 times per hour,
> then you are actually slowing down the whole site by caching unique
> queries.
>
> I would recommend doing a very quick study to identify the cacheable
> bottlenecks. Before caching your doSelect for example, simply add a
> small logging feature that you enable for a day and then count how
> many different queries you had and how many times each came up. Maybe
> one query represents half of your db requests and you don't even need
> to cache the others :-)
>
> Good luck !
>
> Fabrice Bernhard
> --
> http://www.theodo.fr
> >
>




[symfony-users] Re: Symfony Production Performance improvements

2009-03-10 Thread Fabrice B

> Naturally there is one problem with a hash; the
> possibility that two different strings would create the same hash. So create
> two hashes and concatenate them, an SHA1 hash and an MD5 hash.

By definition a hash is supposed to statistically avoid this problem.
But well, it doesn't cost you much time to concatenate your two hashes
if you don't trust md5 enough :-)

>As a brief example for what I mean by overriding the doSelect method for a
> model class

Your idea is quite simple and should work, but be aware that it could
also actually make the application slower! Caching is interesting only
if (statistically) the same query is used more than twice in the
caching interval. So you must evaluate the number of different queries
you have, and the number of times they are used in an hour.

If you are sure you only have 100 different queries in your whole site
and your pages are hit 10,000 times per hour, then it's worth it. If
you have 10,000 different queries (because the criteria contains a
time() variable, for example) and your pages are hit 1,000 times per hour,
then you are actually slowing down the whole site by caching unique
queries.

I would recommend doing a very quick study to identify the cacheable
bottlenecks. Before caching your doSelect, for example, simply add a
small logging feature that you enable for a day, and then count how
many different queries you had and how many times each came up. Maybe
one query represents half of your db requests and you don't even need
to cache the others :-)
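
A throwaway version of that logging could be a couple of lines dropped into
the doSelect() override from earlier in the thread (a sketch only; the log
path and format are arbitrary):

// Log one line per query so the distinct hashes can be counted later.
// Remove this once the measurement day is over.
$log_line = date('c').' '.$md5_serialized.$sha1_serialized."\n";
file_put_contents('/tmp/doselect_queries.log', $log_line, FILE_APPEND);

// Afterwards, something like
//   awk '{print $2}' /tmp/doselect_queries.log | sort | uniq -c | sort -rn | head
// shows how many times each distinct query came up.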

Good luck !

Fabrice Bernhard
--
http://www.theodo.fr



[symfony-users] Re: Symfony Production Performance improvements

2009-03-10 Thread Gareth McCumskey
The reason for this is that we may process the same Criteria object
repeatedly across different modules, actions and views.

On Tue, Mar 10, 2009 at 11:21 AM, Crafty_Shadow  wrote:

>
> I cannot understand why you prefer to cache model objects.
> A much better option in my opinion is to cache the rendered view for
> the site.
> This way you skip any code that uses the model data to generate the
> presentation, and is the fastest option. If you cannot cache the
> entire action, cache partials or template fragments.
>
> Then you could make use of the object's save() method to clear the
> cache. Even better, notify an event and let a dedicated class take
> care of the cache clearing. There was a good blog post about this a
> week or so ago:
>
> http://www.symfony-project.org/blog/2009/02/21/using-the-symfony-event-system
>
> On Mar 10, 8:54 am, Gareth McCumskey  wrote:
> > I have actually come across a rather interesting way to use the memory
> cache
> > as specified in the book (
> http://www.symfony-project.org/book/1_1/18-Performance#chapter_18_sub...
> > )
> >
> > The way it is described to store data in the cache needs three things; a
> > unique name that can be easily determined, the value related to that name
> > and how long that value stays cached. In our application, our database
> > records will very very rarely see any updates. Once records are inserted
> > they are only ever retrieved making them ideal for caching. But we cannot
> > store every database record into memory. We do have queries that run very
> > frequently, however, and each Criteria object is unique for each query
> with
> > the various values and so on. The problem is you cannot store an object,
> > like the Criteria object that is built up before you run a Propel query
> with
> > doSelect or similar methods. So, if you serialize the Criteria object,
> you
> > have a string. But this is a very long string (one of the serialized
> > Criteria objects I tested with was over 1000 characters long). But you
> can
> > convert it to a hash value. Naturally there is one problem with a hash;
> the
> > possibility that two different strings would create the same hash;
> create
> > two hashes and concatenate them, an SHA1 hash and an MD5 hash. This can
> then
> > be the name of the item you are storing in memory. Run the DB query and
> you
> > have the value of the query, which can then also be stored into the
> cache.
> >
> > As a brief example for what I mean by overriding the doSelect method for
> a
> > model class:
> >
> > private static function doSelect($c)
> > {
> >$serialized_c = serialize($c);
> >$md5_serialized = md5($serialized_c);
> >$sha1_serialized = sha1($serialized_c);
> >
> >$cache = new sfAPCCache();
> >
> >if ($cache->has('doSelect'.$md5_serialized.$sha1_serialized))
> >{
> >   $query_value =
> > $cache->get('doSelect'.$md5_serialized.$sha1_serialized);
> >}
> >else
> >{
> >   $query_value = parent::doSelect($c);
> >   $cache->set('doSelect'.$md5_serialized.$sha1_serialized,
> $query_value,
> > 60);
> >}
> >
> >return $query_value;
> >
> > }
> >
> > Any comments would be great on this or any problems with doing this kind
> of
> > thing that I may not have seen please feel free to let me know
> >
> > On Tue, Mar 10, 2009 at 8:06 AM, Gareth McCumskey wrote:
> >
> > > Thanks for that. I have actually been looking at the function cache
> amongst
> > > others and there is a lot we can do there as our DB records, once
> inserted,
> > > are not likely to change. In fact if they do it means we are having a
> > > problem as we store email data for a number of companies in them.
> Therefore
> > > function caching and even memory caching records as we extract them
> from db
> > > would probably help us a lot. It does mean more work code-wise and
> isn't a
> > > "quick-fix", so we plan to start looking at this once we hit Beta where
> > > performance will be a major requirement.
> >
> > > The old system is faster simply because it follows no design pattern
> except
> > > procedural and that is where its speed lies. There are no ORM's,
> classes or
> > > anything like that, and SQL queries are sent straight through to the
> > > database using handcoded, dynamic SQL queries as opposed to an ORM
> generated
> > > one and the resultsets are manipulated directly in each "view". In fact
> > > there are only views, there is little seperation of business logic and
> > > presentation.
> >
> > > The reason we need symfony for this new version is that we are going to
> be
> > > adding more advanced features that would "complicate" the product
> beyond
> > > what a procedural style would allow us to maintain. We are already
> > > struggling to keep the older system maintained and enhanced for our
> > > customers as it is. symfony, Propel and even Prototype with
> scriptaculous
> > > help alleviate these maintenance and extensibility issues.
> >
> > > On Mon, Mar 9, 2009 at 9:13 PM, Rich

[symfony-users] Re: Symfony Production Performance improvements

2009-03-10 Thread Crafty_Shadow

I cannot understand why you prefer to cache model objects.
A much better option, in my opinion, is to cache the rendered view for
the site.
This way you skip any code that uses the model data to generate the
presentation, which is the fastest option. If you cannot cache the
entire action, cache partials or template fragments.

Then you could make use of the object's save() method to clear the
cache. Even better, notify an event and let a dedicated class take
care of the cache clearing. There was a good blog post about this a
week or so ago:
http://www.symfony-project.org/blog/2009/02/21/using-the-symfony-event-system
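
To make that concrete, here is a rough sketch of the save()-plus-event idea
(the Email model, the 'email.saved' event name and the internal URI are all
invented for the example; double-check the dispatcher and view cache manager
methods against your symfony version):

class Email extends BaseEmail
{
  public function save($con = null)
  {
    $ret = parent::save($con);

    // Tell anyone listening that this record has changed.
    sfContext::getInstance()->getEventDispatcher()->notify(
      new sfEvent($this, 'email.saved')
    );

    return $ret;
  }
}

class EmailCacheListener
{
  // Connected somewhere central, e.g.
  // $dispatcher->connect('email.saved', array('EmailCacheListener', 'clearCache'));
  public static function clearCache(sfEvent $event)
  {
    $cacheManager = sfContext::getInstance()->getViewCacheManager();
    if (null !== $cacheManager) // null when the view cache is disabled
    {
      $cacheManager->remove('email/show?id='.$event->getSubject()->getId());
    }
  }
}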

On Mar 10, 8:54 am, Gareth McCumskey  wrote:
> I have actually come across a rather interesting way to use the memory cache
> as specified in the book 
> (http://www.symfony-project.org/book/1_1/18-Performance#chapter_18_sub...
> )
>
> The way it is described to store data in the cache needs three things; a
> unique name that can be easily determined, the value related to that name
> and how long that value stays cached. In our application, our database
> records will very very rarely see any updates. Once records are inserted
> they are only ever retrieved making them ideal for caching. But we cannot
> store every database record into memory. We do have queries that run very
> frequently, however, and each Criteria object is unique for each query with
> the various values and so on. The problem is you cannot store an object,
> like the Criteria object that is built up before you run a Propel query with
> doSelect or similar methods. So, if you serialize the Criteria object, you
> have a string. But this is a very long string (one of the serialized
> Criteria objects I tested with was over 1000 characters long). But you can
> convert it to a hash value. Naturally there is one problem with a hash; the
> possibility that two different strings would create the same hash. So create
> two hashes and concatenate them, an SHA1 hash and an MD5 hash. This can then
> be the name of the item you are storing in memory. Run the DB query and you
> have the value of the query, which can then also be stored into the cache.
>
> As a brief example for what I mean by overriding the doSelect method for a
> model class:
>
> private static function doSelect($c)
> {
>    $serialized_c = serialize($c);
>    $md5_serialized = md5($serialized_c);
>    $sha1_serialized = sha1($serialized_c);
>
>    $cache = new sfAPCCache();
>
>    if ($cache->has('doSelect'.$md5_serialized.$sha1_serialized))
>    {
>       $query_value =
> $cache->get('doSelect'.$md5_serialized.$sha1_serialized);
>    }
>    else
>    {
>       $query_value = parent::doSelect($c);
>       $cache->set('doSelect'.$md5_serialized.$sha1_serialized, $query_value,
> 60);
>    }
>
>    return $query_value;
>
> }
>
> Any comments would be great on this or any problems with doing this kind of
> thing that I may not have seen please feel free to let me know
>
> On Tue, Mar 10, 2009 at 8:06 AM, Gareth McCumskey wrote:
>
> > Thanks for that. I have actually been looking at the function cache amongst
> > others and there is a lot we can do there as our DB records, once inserted,
> > are not likely to change. In fact if they do it means we are having a
> > problem as we store email data for a number of companies in them. Therefore
> > function caching and even memory caching records as we extract them from db
> > would probably help us a lot. It does mean more work code-wise and isn't a
> > "quick-fix", so we plan to start looking at this once we hit Beta where
> > performance will be a major requirement.
>
> > The old system is faster simply because it follows no design pattern except
> > procedural and that is where its speed lies. There are no ORM's, classes or
> > anything like that, and SQL queries are sent straight through to the
> > database using handcoded, dynamic SQL queries as opposed to an ORM generated
> > one and the resultsets are manipulated directly in each "view". In fact
> > there are only views, there is little seperation of business logic and
> > presentation.
>
> > The reason we need symfony for this new version is that we are going to be
> > adding more advanced features that would "complicate" the product beyond
> > what a procedural style would allow us to maintain. We are already
> > struggling to keep the older system maintained and enhanced for our
> > customers as it is. symfony, Propel and even Prototype with scriptaculous
> > help alleviate these maintenance and extensibility issues.
>
> > On Mon, Mar 9, 2009 at 9:13 PM, Richtermeister  wrote:
>
> >> Hi Gareth,
>
> >> after reading all this I feel your time is most likely best spent in
> >> smart caching, since it sounds like the DB is not your bottleneck.
> >> What's easy to overlook when working with symfony, is that compared to
> >> straight procedural "get data -> display data" scripts, rendering
> >> templates with hydrated objects is slower, albeit more flexible. So,
> >> if your pr

[symfony-users] Re: Symfony Production Performance improvements

2009-03-10 Thread Gareth McCumskey
It is unfortunately a little late in the game for us to switch ORMs now,
otherwise I might have.

On Tue, Mar 10, 2009 at 9:15 AM, Jeremy Benoist wrote:

> I don't know Propel very well, but I know that Doctrine natively handles a
> cache system that is easy to use
> http://www.ullright.org/ullWiki/show/docid/64
>
> And this cache could be use with different system (like apc, memcache,
> etc..)
> http://trac.doctrine-project.org/browser/trunk/lib/Doctrine/ORM/Cache
>
> If Propel doesn't handle cache maybe you can check how doctrine handle it
> to try to do the same, like your little snippet.
>
> J.
>
>
> On Tue, Mar 10, 2009 at 7:54 AM, Gareth McCumskey wrote:
>
>> I have actually come across a rather interesting way to use the memory
>> cache as specified in the book (
>> http://www.symfony-project.org/book/1_1/18-Performance#chapter_18_sub_caching_data_in_the_server
>> )
>>
>> The way it is described to store data in the cache needs three things; a
>> unique name that can be easily determined, the value related to that name
>> and how long that value stays cached. In our application, our database
>> records will very very rarely see any updates. Once records are inserted
>> they are only ever retrieved making them ideal for caching. But we cannot
>> store every database record into memory. We do have queries that run very
>> frequently, however, and each Criteria object is unique for each query with
>> the various values and so on. The problem is you cannot store an object,
>> like the Criteria object that is built up before you run a Propel query with
>> doSelect or similar methods. So, if you serialize the Criteria object, you
>> have a string. But this is a very long string (one of the serialized
>> Criteria objects I tested with was over 1000 characters long). But you can
>> convert it to a hash value. Naturally there is one problem with a hash; the
>> possibility that two different strings would create the same hash. So create
>> two hashes and concatenate them, an SHA1 hash and an MD5 hash. This can then
>> be the name of the item you are storing in memory. Run the DB query and you
>> have the value of the query, which can then also be stored into the cache.
>>
>> As a brief example for what I mean by overriding the doSelect method for a
>> model class:
>>
>> private static function doSelect($c)
>> {
>>$serialized_c = serialize($c);
>>$md5_serialized = md5($serialized_c);
>>$sha1_serialized = sha1($serialized_c);
>>
>>$cache = new sfAPCCache();
>>
>>if ($cache->has('doSelect'.$md5_serialized.$sha1_serialized))
>>{
>>   $query_value =
>> $cache->get('doSelect'.$md5_serialized.$sha1_serialized);
>>}
>>else
>>{
>>   $query_value = parent::doSelect($c);
>>   $cache->set('doSelect'.$md5_serialized.$sha1_serialized,
>> $query_value, 60);
>>}
>>
>>return $query_value;
>> }
>>
>> Any comments would be great on this or any problems with doing this kind
>> of thing that I may not have seen please feel free to let me know
>>
>>
>> On Tue, Mar 10, 2009 at 8:06 AM, Gareth McCumskey 
>> wrote:
>>
>>> Thanks for that. I have actually been looking at the function cache
>>> amongst others and there is a lot we can do there as our DB records, once
>>> inserted, are not likely to change. In fact if they do it means we are
>>> having a problem as we store email data for a number of companies in them.
>>> Therefore function caching and even memory caching records as we extract
>>> them from db would probably help us a lot. It does mean more work code-wise
>>> and isn't a "quick-fix", so we plan to start looking at this once we hit
>>> Beta where performance will be a major requirement.
>>>
>>> The old system is faster simply because it follows no design pattern
>>> except procedural and that is where its speed lies. There are no ORM's,
>>> classes or anything like that, and SQL queries are sent straight through to
>>> the database using handcoded, dynamic SQL queries as opposed to an ORM
>>> generated one and the resultsets are manipulated directly in each "view". In
>>> fact there are only views, there is little seperation of business logic and
>>> presentation.
>>>
>>> The reason we need symfony for this new version is that we are going to
>>> be adding more advanced features that would "complicate" the product beyond
>>> what a procedural style would allow us to maintain. We are already
>>> struggling to keep the older system maintained and enhanced for our
>>> customers as it is. symfony, Propel and even Prototype with scriptaculous
>>> help alleviate these maintenance and extensibility issues.
>>>
>>>
>>> On Mon, Mar 9, 2009 at 9:13 PM, Richtermeister  wrote:
>>>

 Hi Gareth,

 after reading all this I feel your time is most likely best spent in
 smart caching, since it sounds like the DB is not your bottleneck.
 What's easy to overlook when working with symfony, is that compared to
 straight procedural "get data -> display data" scri

[symfony-users] Re: Symfony Production Performance improvements

2009-03-10 Thread Jeremy Benoist
I don't know Propel very well, but I know that Doctrine natively handles a
cache system that is easy to use:
http://www.ullright.org/ullWiki/show/docid/64

And this cache can be used with different backends (like APC, memcache,
etc.): http://trac.doctrine-project.org/browser/trunk/lib/Doctrine/ORM/Cache

If Propel doesn't handle caching, maybe you can check how Doctrine handles it
and try to do the same, like your little snippet.
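
For reference, the Doctrine 1.x result cache looks roughly like this (a sketch
from memory; attribute and method names may differ slightly between Doctrine
releases, and the Email model is invented for the example):

// Bootstrap: tell Doctrine to keep query results in APC.
$cacheDriver = new Doctrine_Cache_Apc();
Doctrine_Manager::getInstance()->setAttribute(Doctrine::ATTR_RESULT_CACHE, $cacheDriver);

// Per query: cache this result set for an hour.
$emails = Doctrine_Query::create()
  ->from('Email e')
  ->where('e.to_address = ?', $address)
  ->useResultCache(true, 3600)
  ->execute();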

J.


On Tue, Mar 10, 2009 at 7:54 AM, Gareth McCumskey wrote:

> I have actually come across a rather interesting way to use the memory
> cache as specified in the book (
> http://www.symfony-project.org/book/1_1/18-Performance#chapter_18_sub_caching_data_in_the_server
> )
>
> The way it is described to store data in the cache needs three things; a
> unique name that can be easily determined, the value related to that name
> and how long that value stays cached. In our application, our database
> records will very very rarely see any updates. Once records are inserted
> they are only ever retrieved making them ideal for caching. But we cannot
> store every database record into memory. We do have queries that run very
> frequently, however, and each Criteria object is unique for each query with
> the various values and so on. The problem is you cannot store an object,
> like the Criteria object that is built up before you run a Propel query with
> doSelect or similar methods. So, if you serialize the Criteria object, you
> have a string. But this is a very long string (one of the serialized
> Criteria objects I tested with was over 1000 characters long). But you can
> convert it to a hash value. Naturally there is one problem with a hash; the
> possibility that two different strings would create the same hash. So create
> two hashes and concatenate them, an SHA1 hash and an MD5 hash. This can then
> be the name of the item you are storing in memory. Run the DB query and you
> have the value of the query, which can then also be stored into the cache.
>
> As a brief example for what I mean by overriding the doSelect method for a
> model class:
>
> private static function doSelect($c)
> {
>$serialized_c = serialize($c);
>$md5_serialized = md5($serialized_c);
>$sha1_serialized = sha1($serialized_c);
>
>$cache = new sfAPCCache();
>
>if ($cache->has('doSelect'.$md5_serialized.$sha1_serialized))
>{
>   $query_value =
> $cache->get('doSelect'.$md5_serialized.$sha1_serialized);
>}
>else
>{
>   $query_value = parent::doSelect($c);
>   $cache->set('doSelect'.$md5_serialized.$sha1_serialized,
> $query_value, 60);
>}
>
>return $query_value;
> }
>
> Any comments would be great on this or any problems with doing this kind of
> thing that I may not have seen please feel free to let me know
>
>
> On Tue, Mar 10, 2009 at 8:06 AM, Gareth McCumskey wrote:
>
>> Thanks for that. I have actually been looking at the function cache
>> amongst others and there is a lot we can do there as our DB records, once
>> inserted, are not likely to change. In fact if they do it means we are
>> having a problem as we store email data for a number of companies in them.
>> Therefore function caching and even memory caching records as we extract
>> them from db would probably help us a lot. It does mean more work code-wise
>> and isn't a "quick-fix", so we plan to start looking at this once we hit
>> Beta where performance will be a major requirement.
>>
>> The old system is faster simply because it follows no design pattern
>> except procedural and that is where its speed lies. There are no ORM's,
>> classes or anything like that, and SQL queries are sent straight through to
>> the database using handcoded, dynamic SQL queries as opposed to an ORM
>> generated one and the resultsets are manipulated directly in each "view". In
>> fact there are only views, there is little seperation of business logic and
>> presentation.
>>
>> The reason we need symfony for this new version is that we are going to be
>> adding more advanced features that would "complicate" the product beyond
>> what a procedural style would allow us to maintain. We are already
>> struggling to keep the older system maintained and enhanced for our
>> customers as it is. symfony, Propel and even Prototype with scriptaculous
>> help alleviate these maintenance and extensibility issues.
>>
>>
>> On Mon, Mar 9, 2009 at 9:13 PM, Richtermeister  wrote:
>>
>>>
>>> Hi Gareth,
>>>
>>> after reading all this I feel your time is most likely best spent in
>>> smart caching, since it sounds like the DB is not your bottleneck.
>>> What's easy to overlook when working with symfony, is that compared to
>>> straight procedural "get data -> display data" scripts, rendering
>>> templates with hydrated objects is slower, albeit more flexible. So,
>>> if your previous site was coded in a bad way, it was probably using a
>>> lot of "view specific" code, so it's hard to compete with that on pure
>>> speed considerations. The only 

[symfony-users] Re: Symfony Production Performance improvements

2009-03-09 Thread Gareth McCumskey
I have actually come across a rather interesting way to use the memory cache
as specified in the book (
http://www.symfony-project.org/book/1_1/18-Performance#chapter_18_sub_caching_data_in_the_server
)

The way it is described, storing data in the cache needs three things: a
unique name that can be easily determined, the value related to that name,
and how long that value stays cached. In our application, our database
records will very rarely see any updates. Once records are inserted
they are only ever retrieved, making them ideal for caching. But we cannot
store every database record in memory. We do have queries that run very
frequently, however, and each Criteria object is unique for each query with
its various values and so on. The problem is that you cannot use an object,
like the Criteria object that is built up before you run a Propel query with
doSelect or similar methods, directly as a cache name. So, if you serialize
the Criteria object, you have a string. But this is a very long string (one
of the serialized Criteria objects I tested with was over 1000 characters
long). You can, however, convert it to a hash value. Naturally there is one
problem with a hash: the possibility that two different strings would create
the same hash. So create two hashes and concatenate them, an SHA1 hash and
an MD5 hash. This can then be the name of the item you are storing in
memory. Run the DB query and you have the value of the query, which can then
also be stored in the cache.

As a brief example for what I mean by overriding the doSelect method for a
model class:

// Must stay public static so existing callers (and the parent peer class)
// can still call it; reducing the visibility would be a PHP error.
public static function doSelect(Criteria $c, $con = null)
{
   // Build a cache key from the serialized Criteria. The MD5 and SHA1
   // hashes are concatenated to make accidental collisions even less likely.
   $serialized_c = serialize($c);
   $md5_serialized = md5($serialized_c);
   $sha1_serialized = sha1($serialized_c);
   $cache_key = 'doSelect'.$md5_serialized.$sha1_serialized;

   $cache = new sfAPCCache();

   if ($cache->has($cache_key))
   {
      // Cache hit: reuse the result of an identical earlier query.
      $query_value = $cache->get($cache_key);
   }
   else
   {
      // Cache miss: run the query and keep the result for 60 seconds.
      $query_value = parent::doSelect($c, $con);
      $cache->set($cache_key, $query_value, 60);
   }

   return $query_value;
}

Any comments on this would be great, and if there are any problems with doing
this kind of thing that I may not have seen, please feel free to let me know.
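
For reference, callers do not change at all; the caching stays hidden inside
the peer class (the EmailPeer model and its TO_ADDRESS column are hypothetical
here):

$c = new Criteria();
$c->add(EmailPeer::TO_ADDRESS, 'someone@example.com');

// The first call runs the SQL and stores the result in APC; identical
// calls within the next 60 seconds are served straight from the cache.
$emails = EmailPeer::doSelect($c);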

On Tue, Mar 10, 2009 at 8:06 AM, Gareth McCumskey wrote:

> Thanks for that. I have actually been looking at the function cache amongst
> others and there is a lot we can do there as our DB records, once inserted,
> are not likely to change. In fact if they do it means we are having a
> problem as we store email data for a number of companies in them. Therefore
> function caching and even memory caching records as we extract them from db
> would probably help us a lot. It does mean more work code-wise and isn't a
> "quick-fix", so we plan to start looking at this once we hit Beta where
> performance will be a major requirement.
>
> The old system is faster simply because it follows no design pattern except
> procedural and that is where its speed lies. There are no ORM's, classes or
> anything like that, and SQL queries are sent straight through to the
> database using handcoded, dynamic SQL queries as opposed to an ORM generated
> one and the resultsets are manipulated directly in each "view". In fact
> there are only views, there is little seperation of business logic and
> presentation.
>
> The reason we need symfony for this new version is that we are going to be
> adding more advanced features that would "complicate" the product beyond
> what a procedural style would allow us to maintain. We are already
> struggling to keep the older system maintained and enhanced for our
> customers as it is. symfony, Propel and even Prototype with scriptaculous
> help alleviate these maintenance and extensibility issues.
>
>
> On Mon, Mar 9, 2009 at 9:13 PM, Richtermeister  wrote:
>
>>
>> Hi Gareth,
>>
>> after reading all this I feel your time is most likely best spent in
>> smart caching, since it sounds like the DB is not your bottleneck.
>> What's easy to overlook when working with symfony, is that compared to
>> straight procedural "get data -> display data" scripts, rendering
>> templates with hydrated objects is slower, albeit more flexible. So,
>> if your previous site was coded in a bad way, it was probably using a
>> lot of "view specific" code, so it's hard to compete with that on pure
>> speed considerations. The only way to mitigate that is by using all
>> forms of caching, and yes, this may include the function cache
>> (although I don't like it much). However, the "higher up" you can
>> cache, i.e. a complete action, the less you have to cache on the model
>> level.
>>
>> Just my 2 cents. Good luck,
>> Daniel
>>
>>
>>
>>
>>
>> On Mar 9, 8:42 am, Sumedh  wrote:
>> > My 2 cents...slow query log in mysql should help a lot...
>> >
>> > Please let us know your insights at the end of your exercise... :)
>> >
>> > On Mar 9, 3:41 pm, Gareth McCumskey  wrote:
>> >
>> > > I just tried using Propel 1.3 on our application and while I would
>> l

[symfony-users] Re: Symfony Production Performance improvements

2009-03-09 Thread Gareth McCumskey
Thanks for that. I have actually been looking at the function cache amongst
others, and there is a lot we can do there, as our DB records, once inserted,
are not likely to change. In fact, if they do, it means we have a
problem, as we store email data for a number of companies in them. Therefore
function caching, and even memory-caching records as we extract them from the
db, would probably help us a lot. It does mean more work code-wise and isn't a
"quick-fix", so we plan to start looking at this once we hit Beta, where
performance will be a major requirement.

The old system is faster simply because it follows no design pattern except
procedural, and that is where its speed lies. There are no ORMs, classes or
anything like that; SQL queries are sent straight through to the
database as hand-coded, dynamic SQL rather than ORM-generated queries, and
the result sets are manipulated directly in each "view". In fact
there are only views; there is little separation of business logic and
presentation.

The reason we need symfony for this new version is that we are going to be
adding more advanced features that would "complicate" the product beyond
what a procedural style would allow us to maintain. We are already
struggling to keep the older system maintained and enhanced for our
customers as it is. symfony, Propel and even Prototype with scriptaculous
help alleviate these maintenance and extensibility issues.

On Mon, Mar 9, 2009 at 9:13 PM, Richtermeister  wrote:

>
> Hi Gareth,
>
> after reading all this I feel your time is most likely best spent in
> smart caching, since it sounds like the DB is not your bottleneck.
> What's easy to overlook when working with symfony, is that compared to
> straight procedural "get data -> display data" scripts, rendering
> templates with hydrated objects is slower, albeit more flexible. So,
> if your previous site was coded in a bad way, it was probably using a
> lot of "view specific" code, so it's hard to compete with that on pure
> speed considerations. The only way to mitigate that is by using all
> forms of caching, and yes, this may include the function cache
> (although I don't like it much). However, the "higher up" you can
> cache, i.e. a complete action, the less you have to cache on the model
> level.
>
> Just my 2 cents. Good luck,
> Daniel
>
>
>
>
>
> On Mar 9, 8:42 am, Sumedh  wrote:
> > My 2 cents...slow query log in mysql should help a lot...
> >
> > Please let us know your insights at the end of your exercise... :)
> >
> > On Mar 9, 3:41 pm, Gareth McCumskey  wrote:
> >
> > > I just tried using Propel 1.3 on our application and while I would love
> to
> > > continue using it (as it seemed to produce a little more efficieny) we
> can't
> > > use it for now because the servers that the app will run on are Centos
> 4
> > > with PHP 5.1.x as its maximum version for now. The sysadmins here say
> that
> > > to force an upgrade to 5.2.x would be a hard task as to retain RedHat
> > > support it means they would need to upgrade to Centos 5.
> >
> > > I am currently looking at the chapter about Optimising symfony and the
> > > function cache seems to be something we cna consider doing in a lot of
> our
> > > model calls from the action to help speed things up, especially for
> model
> > > methods that access historical data (i.e. stuff dated in the past that
> > > obviously wont change on subsequent calls) but these are relatively
> large
> > > coding changes which we will probably only do during our beta
> development
> > > phase.
> >
> > > I am still looking through more advise recieved from this post and I
> have to
> > > thank everyone for their input. I honestly didn't expect this response
> and
> > > it has been fantastic and very helpful.
> >
> > > On Mon, Mar 9, 2009 at 12:49 AM, Crafty_Shadow 
> wrote:
> >
> > > > Symfony 1.1 came by default with Propel 1.2
> > > > You can try upgrading to 1.3 (it isn't really a trivial task, but it
> > > > shouldn't be a big problem)
> > > > There is thorough explanation on the symfony site how to do it:
> > > >http://www.symfony-project.org/cookbook/1_1/en/propel_13
> > > > It should fare a measurable increase in performance. Also, a site
> that
> > > > makes good use of cache should have caching for absolutely everything
> > > > not session-dependent. I find it hard to imagine a php app, no matter
> > > > how fast, that would run faster than symfony's cached output.
> >
> > > > Alvaro:
> > > > Is your plugin based on Propel 1.3?
> > > > If you believe you have made significant improvements to Propel, why
> > > > not suggest them for version 2.0, which is still under heavy
> > > > development?
> >
> > > > On Mar 8, 4:33 pm, alvaro  wrote:
> > > > > At the company I developed a symfony plugin to optimize the Propel
> > > > > queries and also the Propel hydrate method, improving even 5 times
> > > > > query speed and also memory usage.
> >
> > > > > The plugins supports joins and thanks to PHP features the plugin
> > >

[symfony-users] Re: Symfony Production Performance improvements

2009-03-09 Thread Richtermeister

Hi Gareth,

after reading all this I feel your time is most likely best spent on
smart caching, since it sounds like the DB is not your bottleneck.
What's easy to overlook when working with symfony is that, compared to
straight procedural "get data -> display data" scripts, rendering
templates with hydrated objects is slower, albeit more flexible. So,
if your previous site was coded in a bad way, it was probably using a
lot of "view-specific" code, so it's hard to compete with that on pure
speed considerations. The only way to mitigate that is by using all
forms of caching, and yes, this may include the function cache
(although I don't like it much). However, the "higher up" you can
cache, i.e. a complete action, the less you have to cache on the model
level.

Just my 2 cents. Good luck,
Daniel





On Mar 9, 8:42 am, Sumedh  wrote:
> My 2 cents...slow query log in mysql should help a lot...
>
> Please let us know your insights at the end of your exercise... :)
>
> On Mar 9, 3:41 pm, Gareth McCumskey  wrote:
>
> > I just tried using Propel 1.3 on our application and while I would love to
> > continue using it (as it seemed to produce a little more efficieny) we can't
> > use it for now because the servers that the app will run on are Centos 4
> > with PHP 5.1.x as its maximum version for now. The sysadmins here say that
> > to force an upgrade to 5.2.x would be a hard task as to retain RedHat
> > support it means they would need to upgrade to Centos 5.
>
> > I am currently looking at the chapter about Optimising symfony and the
> > function cache seems to be something we cna consider doing in a lot of our
> > model calls from the action to help speed things up, especially for model
> > methods that access historical data (i.e. stuff dated in the past that
> > obviously wont change on subsequent calls) but these are relatively large
> > coding changes which we will probably only do during our beta development
> > phase.
>
> > I am still looking through more advise recieved from this post and I have to
> > thank everyone for their input. I honestly didn't expect this response and
> > it has been fantastic and very helpful.
>
> > On Mon, Mar 9, 2009 at 12:49 AM, Crafty_Shadow  wrote:
>
> > > Symfony 1.1 came by default with Propel 1.2
> > > You can try upgrading to 1.3 (it isn't really a trivial task, but it
> > > shouldn't be a big problem)
> > > There is thorough explanation on the symfony site how to do it:
> > >http://www.symfony-project.org/cookbook/1_1/en/propel_13
> > > It should fare a measurable increase in performance. Also, a site that
> > > makes good use of cache should have caching for absolutely everything
> > > not session-dependent. I find it hard to imagine a php app, no matter
> > > how fast, that would run faster than symfony's cached output.
>
> > > Alvaro:
> > > Is your plugin based on Propel 1.3?
> > > If you believe you have made significant improvements to Propel, why
> > > not suggest them for version 2.0, which is still under heavy
> > > development?
>
> > > On Mar 8, 4:33 pm, alvaro  wrote:
> > > > At the company I developed a symfony plugin to optimize the Propel
> > > > queries and also the Propel hydrate method, improving even 5 times
> > > > query speed and also memory usage.
>
> > > > The plugins supports joins and thanks to PHP features the plugin
> > > > returns Propel objects populated with custom AS columns.
>
> > > > We are thinking on release it on the following weeks so stay tuned :)
>
> > > > Regards,
>
> > > > Alvaro
>
> > > > On Mar 8, 2009, at 10:20 PM, Gareth McCumskey wrote:
>
> > > > > We have put numerous caching techniques into effect, from Cache-
> > > > > Expires headers to compression of static files like js and html
> > > > > files. Currently we use symfony 1.1 and Propel as the ORM. We have
> > > > > identified the bottleneck generally as being the application
> > > > > processing after the db queries have run to extract the data.
>
> > > > > The entire point of my question was to get some info on general tips
> > > > > and tricks we can try out to see if anything helps or if perhaps we
> > > > > have missed any obvious issues that may actually be the cause of the
> > > > > slow performance we are getting. As it is I have gotten quite a few
> > > > > and look forward to getting into the office tomorrow to try them
> > > > > out. Anymore is greatly appreciated.
>
> > > > > Of course I am looking through the code to see if there is anyway we
> > > > > can streamline it on that end, but every little bit helps.
>
> > > > > Gareth
>
> > > > > On Sun, Mar 8, 2009 at 12:27 PM, Crafty_Shadow 
> > > > > wrote:
>
> > > > > Gareth, you didn't mention what version of symfony you were using,
> > > > > also what ORM (if any).
> > > > > The best course of optimization will depend on those. Also, as already
> > > > > mentioned, caching is your best friend.
>
> > > > > On Mar 8, 9:43 am, Gareth McCumskey  wrote:
> > > > > > Well, consider a single database table that look

[symfony-users] Re: Symfony Production Performance improvements

2009-03-09 Thread Sumedh

My 2 cents... the slow query log in MySQL should help a lot...
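
(Enabling it is just a couple of lines in my.cnf, roughly as below; the exact
option names vary between MySQL versions, so check your server's manual.)

[mysqld]
# Log every statement that takes longer than 2 seconds.
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time  = 2
# Optionally also log queries that use no index at all.
log-queries-not-using-indexes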

Please let us know your insights at the end of your exercise... :)

On Mar 9, 3:41 pm, Gareth McCumskey  wrote:
> I just tried using Propel 1.3 on our application and while I would love to
> continue using it (as it seemed to produce a little more efficieny) we can't
> use it for now because the servers that the app will run on are Centos 4
> with PHP 5.1.x as its maximum version for now. The sysadmins here say that
> to force an upgrade to 5.2.x would be a hard task as to retain RedHat
> support it means they would need to upgrade to Centos 5.
>
> I am currently looking at the chapter about Optimising symfony and the
> function cache seems to be something we cna consider doing in a lot of our
> model calls from the action to help speed things up, especially for model
> methods that access historical data (i.e. stuff dated in the past that
> obviously wont change on subsequent calls) but these are relatively large
> coding changes which we will probably only do during our beta development
> phase.
>
> I am still looking through more advise recieved from this post and I have to
> thank everyone for their input. I honestly didn't expect this response and
> it has been fantastic and very helpful.
>
> On Mon, Mar 9, 2009 at 12:49 AM, Crafty_Shadow  wrote:
>
> > Symfony 1.1 came by default with Propel 1.2
> > You can try upgrading to 1.3 (it isn't really a trivial task, but it
> > shouldn't be a big problem)
> > There is thorough explanation on the symfony site how to do it:
> >http://www.symfony-project.org/cookbook/1_1/en/propel_13
> > It should fare a measurable increase in performance. Also, a site that
> > makes good use of cache should have caching for absolutely everything
> > not session-dependent. I find it hard to imagine a php app, no matter
> > how fast, that would run faster than symfony's cached output.
>
> > Alvaro:
> > Is your plugin based on Propel 1.3?
> > If you believe you have made significant improvements to Propel, why
> > not suggest them for version 2.0, which is still under heavy
> > development?
>
> > On Mar 8, 4:33 pm, alvaro  wrote:
> > > At the company I developed a symfony plugin to optimize the Propel
> > > queries and also the Propel hydrate method, improving even 5 times
> > > query speed and also memory usage.
>
> > > The plugins supports joins and thanks to PHP features the plugin
> > > returns Propel objects populated with custom AS columns.
>
> > > We are thinking on release it on the following weeks so stay tuned :)
>
> > > Regards,
>
> > > Alvaro
>
> > > On Mar 8, 2009, at 10:20 PM, Gareth McCumskey wrote:
>
> > > > We have put numerous caching techniques into effect, from Cache-
> > > > Expires headers to compression of static files like js and html
> > > > files. Currently we use symfony 1.1 and Propel as the ORM. We have
> > > > identified the bottleneck generally as being the application
> > > > processing after the db queries have run to extract the data.
>
> > > > The entire point of my question was to get some info on general tips
> > > > and tricks we can try out to see if anything helps or if perhaps we
> > > > have missed any obvious issues that may actually be the cause of the
> > > > slow performance we are getting. As it is I have gotten quite a few
> > > > and look forward to getting into the office tomorrow to try them
> > > > out. Anymore is greatly appreciated.
>
> > > > Of course I am looking through the code to see if there is anyway we
> > > > can streamline it on that end, but every little bit helps.
>
> > > > Gareth
>
> > > > On Sun, Mar 8, 2009 at 12:27 PM, Crafty_Shadow 
> > > > wrote:
>
> > > > Gareth, you didn't mention what version of symfony you were using,
> > > > also what ORM (if any).
> > > > The best course of optimization will depend on those. Also, as already
> > > > mentioned, caching is your best friend.
>
> > > > On Mar 8, 9:43 am, Gareth McCumskey  wrote:
> > > > > Well, consider a single database table that looks something like
> > > > this:
>
> > > > > From_address
> > > > > to_address (possibly multiple addresses comma-separated)
> > > > > headers
> > > > > spam_report
> > > > > subject
>
> > > > > And we would have millions of those records in the database.
> > > > Repeated
> > > > > entries, especially on to_address, means the data is hugely
> > > > redundant. By
> > > > > normalising we are turning a text search across millions of
> > > > records with
> > > > > redundant repeated data into a text search over a unique list,
> > > > then an
> > > > > integer search over primary key (which of course is indexed).
>
> > > > > On Sun, Mar 8, 2009 at 9:37 AM, Lawrence Krubner
> > > > wrote:
>
> > > > > > On Mar 8, 3:26 am, Gareth McCumskey  wrote:
> > > > > > > We had a speed increase because we had a lot of text searches
> > > > in the old
> > > > > > > system, all going through text fields where the same values
> > > > were repeated
> > > > > > > over and over. Its therefor

[symfony-users] Re: Symfony Production Performance improvements

2009-03-09 Thread Gareth McCumskey
I just tried using Propel 1.3 on our application, and while I would love to
continue using it (as it seemed to produce a little more efficiency) we can't
use it for now, because the servers that the app will run on are CentOS 4
with PHP 5.1.x as the maximum version for now. The sysadmins here say that
forcing an upgrade to 5.2.x would be a hard task, as retaining Red Hat
support would mean upgrading to CentOS 5.

I am currently looking at the chapter about optimising symfony, and the
function cache seems to be something we can consider using in a lot of our
model calls from the action to help speed things up, especially for model
methods that access historical data (i.e. stuff dated in the past that
obviously won't change on subsequent calls), but these are relatively large
coding changes which we will probably only do during our beta development
phase.
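
As a rough idea of what that could look like with symfony 1.1's function cache
(a sketch based on the book's example; EmailPeer::getHistoricalStats() is an
invented model method, and the cache directory is just a suggestion):

$fileCache = new sfFileCache(array('cache_dir' => sfConfig::get('sf_cache_dir').'/function'));
$functionCache = new sfFunctionCache($fileCache);

// The result is computed once and then replayed from the cache for identical
// arguments - ideal for historical data that will not change on later calls.
$stats = $functionCache->call(array('EmailPeer', 'getHistoricalStats'), array($year, $month));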

I am still looking through more advice received from this post, and I have to
thank everyone for their input. I honestly didn't expect this response and
it has been fantastic and very helpful.

On Mon, Mar 9, 2009 at 12:49 AM, Crafty_Shadow  wrote:

>
> Symfony 1.1 came by default with Propel 1.2
> You can try upgrading to 1.3 (it isn't really a trivial task, but it
> shouldn't be a big problem)
> There is thorough explanation on the symfony site how to do it:
> http://www.symfony-project.org/cookbook/1_1/en/propel_13
> It should fare a measurable increase in performance. Also, a site that
> makes good use of cache should have caching for absolutely everything
> not session-dependent. I find it hard to imagine a php app, no matter
> how fast, that would run faster than symfony's cached output.
>
> Alvaro:
> Is your plugin based on Propel 1.3?
> If you believe you have made significant improvements to Propel, why
> not suggest them for version 2.0, which is still under heavy
> development?
>
> On Mar 8, 4:33 pm, alvaro  wrote:
> > At the company I developed a symfony plugin to optimize the Propel
> > queries and also the Propel hydrate method, improving even 5 times
> > query speed and also memory usage.
> >
> > The plugins supports joins and thanks to PHP features the plugin
> > returns Propel objects populated with custom AS columns.
> >
> > We are thinking on release it on the following weeks so stay tuned :)
> >
> > Regards,
> >
> > Alvaro
> >
> > On Mar 8, 2009, at 10:20 PM, Gareth McCumskey wrote:
> >
> > > We have put numerous caching techniques into effect, from Cache-
> > > Expires headers to compression of static files like js and html
> > > files. Currently we use symfony 1.1 and Propel as the ORM. We have
> > > identified the bottleneck generally as being the application
> > > processing after the db queries have run to extract the data.
> >
> > > The entire point of my question was to get some info on general tips
> > > and tricks we can try out to see if anything helps or if perhaps we
> > > have missed any obvious issues that may actually be the cause of the
> > > slow performance we are getting. As it is I have gotten quite a few
> > > and look forward to getting into the office tomorrow to try them
> > > out. Anymore is greatly appreciated.
> >
> > > Of course I am looking through the code to see if there is anyway we
> > > can streamline it on that end, but every little bit helps.
> >
> > > Gareth
> >
> > > On Sun, Mar 8, 2009 at 12:27 PM, Crafty_Shadow 
> > > wrote:
> >
> > > Gareth, you didn't mention what version of symfony you were using,
> > > also what ORM (if any).
> > > The best course of optimization will depend on those. Also, as already
> > > mentioned, caching is your best friend.
> >
> > > On Mar 8, 9:43 am, Gareth McCumskey  wrote:
> > > > Well, consider a single database table that looks something like
> > > this:
> >
> > > > From_address
> > > > to_address (possibly multiple addresses comma-seperated)
> > > > headers
> > > > spam_report
> > > > subject
> >
> > > > And we would have millions of those records in the database.
> > > Repeated
> > > > entries, especially on to_address, means the data is hugely
> > > redundant. By
> > > > normalising we are turning a text search across millions of
> > > records with
> > > > redundant repeated data into a text search over a unique list,
> > > then an
> > > > integer search over primary key (which of course is indexed).
> >
> > > > On Sun, Mar 8, 2009 at 9:37 AM, Lawrence Krubner
> > > wrote:
> >
> > > > > On Mar 8, 3:26 am, Gareth McCumskey  wrote:
> > > > > > We had a speed increase because we had a lot of text searches
> > > in the old
> > > > > > system, all going through text fields where the same values
> > > were repeated
> > > > > > over and over. Its therefore a lot faster to search a much
> > > smaller table,
> > > > > > where the text fields are unique, and find the value once,
> > > then use an ID
> > > > > > comparison, being much faster to match integers than text.
> >
> > > > > In sounds like you got a speed boost from doing intelligent
> > > indexing.
> > > >

[symfony-users] Re: Symfony Production Performance improvements

2009-03-08 Thread alvaro

One of the ideas I have is to propose it to the Propel guys.

At the moment it works with Propel 1.2, but it is not difficult to make
it work with Propel 1.3.

Regards,

Alvaro


On Mar 9, 2009, at 6:49 AM, Crafty_Shadow wrote:

>
> Symfony 1.1 came by default with Propel 1.2
> You can try upgrading to 1.3 (it isn't really a trivial task, but it
> shouldn't be a big problem)
> There is thorough explanation on the symfony site how to do it:
> http://www.symfony-project.org/cookbook/1_1/en/propel_13
> It should fare a measurable increase in performance. Also, a site that
> makes good use of cache should have caching for absolutely everything
> not session-dependent. I find it hard to imagine a php app, no matter
> how fast, that would run faster than symfony's cached output.
>
> Alvaro:
> Is your plugin based on Propel 1.3?
> If you believe you have made significant improvements to Propel, why
> not suggest them for version 2.0, which is still under heavy
> development?
>
> On Mar 8, 4:33 pm, alvaro  wrote:
>> At the company I developed a symfony plugin to optimize the Propel
>> queries and also the Propel hydrate method, improving even 5 times
>> query speed and also memory usage.
>>
>> The plugins supports joins and thanks to PHP features the plugin
>> returns Propel objects populated with custom AS columns.
>>
>> We are thinking on release it on the following weeks so stay tuned :)
>>
>> Regards,
>>
>> Alvaro
>>
>> On Mar 8, 2009, at 10:20 PM, Gareth McCumskey wrote:
>>
>>> We have put numerous caching techniques into effect, from Cache-
>>> Expires headers to compression of static files like js and html
>>> files. Currently we use symfony 1.1 and Propel as the ORM. We have
>>> identified the bottleneck generally as being the application
>>> processing after the db queries have run to extract the data.
>>
>>> The entire point of my question was to get some info on general tips
>>> and tricks we can try out to see if anything helps or if perhaps we
>>> have missed any obvious issues that may actually be the cause of the
>>> slow performance we are getting. As it is I have gotten quite a few
>>> and look forward to getting into the office tomorrow to try them
>>> out. Anymore is greatly appreciated.
>>
>>> Of course I am looking through the code to see if there is anyway we
>>> can streamline it on that end, but every little bit helps.
>>
>>> Gareth
>>
>>> On Sun, Mar 8, 2009 at 12:27 PM, Crafty_Shadow 
>>> wrote:
>>
>>> Gareth, you didn't mention what version of symfony you were using,
>>> also what ORM (if any).
>>> The best course of optimization will depend on those. Also, as  
>>> already
>>> mentioned, caching is your best friend.
>>
>>> On Mar 8, 9:43 am, Gareth McCumskey  wrote:
 Well, consider a single database table that looks something like
>>> this:
>>
 From_address
 to_address (possibly multiple addresses comma-seperated)
 headers
 spam_report
 subject
>>
 And we would have millions of those records in the database.
>>> Repeated
 entries, especially on to_address, means the data is hugely
>>> redundant. By
 normalising we are turning a text search across millions of
>>> records with
 redundant repeated data into a text search over a unique list,
>>> then an
 integer search over primary key (which of course is indexed).
>>
 On Sun, Mar 8, 2009 at 9:37 AM, Lawrence Krubner
>>> wrote:
>>
> On Mar 8, 3:26 am, Gareth McCumskey  wrote:
>> We had a speed increase because we had a lot of text searches
>>> in the old
>> system, all going through text fields where the same values
>>> were repeated
>> over and over. Its therefore a lot faster to search a much
>>> smaller table,
>> where the text fields are unique, and find the value once,
>>> then use an ID
>> comparison, being much faster to match integers than text.
>>
> In sounds like you got a speed boost from doing intelligent
>>> indexing.
> What you are describing sounds more like indexing than
>>> normalization,
> at least to me.
> >





[symfony-users] Re: Symfony Production Performance improvements

2009-03-08 Thread Crafty_Shadow

Symfony 1.1 came by default with Propel 1.2.
You can try upgrading to 1.3 (it isn't really a trivial task, but it
shouldn't be a big problem).
There is a thorough explanation on the symfony site of how to do it:
http://www.symfony-project.org/cookbook/1_1/en/propel_13
It should yield a measurable increase in performance. Also, a site that
makes good use of the cache should have caching for absolutely everything
that is not session-dependent. I find it hard to imagine a PHP app, no
matter how fast, that would run faster than symfony's cached output.

Alvaro:
Is your plugin based on Propel 1.3?
If you believe you have made significant improvements to Propel, why
not suggest them for version 2.0, which is still under heavy
development?

On Mar 8, 4:33 pm, alvaro  wrote:
> At the company I developed a symfony plugin to optimize the Propel  
> queries and also the Propel hydrate method, improving even 5 times  
> query speed and also memory usage.
>
> The plugins supports joins and thanks to PHP features the plugin  
> returns Propel objects populated with custom AS columns.
>
> We are thinking on release it on the following weeks so stay tuned :)
>
> Regards,
>
> Alvaro
>
> On Mar 8, 2009, at 10:20 PM, Gareth McCumskey wrote:
>
> > We have put numerous caching techniques into effect, from Cache-
> > Expires headers to compression of static files like js and html  
> > files. Currently we use symfony 1.1 and Propel as the ORM. We have  
> > identified the bottleneck generally as being the application  
> > processing after the db queries have run to extract the data.
>
> > The entire point of my question was to get some info on general tips  
> > and tricks we can try out to see if anything helps or if perhaps we  
> > have missed any obvious issues that may actually be the cause of the  
> > slow performance we are getting. As it is I have gotten quite a few  
> > and look forward to getting into the office tomorrow to try them  
> > out. Anymore is greatly appreciated.
>
> > Of course I am looking through the code to see if there is anyway we  
> > can streamline it on that end, but every little bit helps.
>
> > Gareth
>
> > On Sun, Mar 8, 2009 at 12:27 PM, Crafty_Shadow   
> > wrote:
>
> > Gareth, you didn't mention what version of symfony you were using,
> > also what ORM (if any).
> > The best course of optimization will depend on those. Also, as already
> > mentioned, caching is your best friend.
>
> > On Mar 8, 9:43 am, Gareth McCumskey  wrote:
> > > Well, consider a single database table that looks something like  
> > this:
>
> > > From_address
> > > to_address (possibly multiple addresses comma-seperated)
> > > headers
> > > spam_report
> > > subject
>
> > > And we would have millions of those records in the database.  
> > Repeated
> > > entries, especially on to_address, means the data is hugely  
> > redundant. By
> > > normalising we are turning a text search across millions of  
> > records with
> > > redundant repeated data into a text search over a unique list,  
> > then an
> > > integer search over primary key (which of course is indexed).
>
> > > On Sun, Mar 8, 2009 at 9:37 AM, Lawrence Krubner  
> > wrote:
>
> > > > On Mar 8, 3:26 am, Gareth McCumskey  wrote:
> > > > > We had a speed increase because we had a lot of text searches  
> > in the old
> > > > > system, all going through text fields where the same values  
> > were repeated
> > > > > over and over. Its therefore a lot faster to search a much  
> > smaller table,
> > > > > where the text fields are unique, and find the value once,  
> > then use an ID
> > > > > comparison, being much faster to match integers than text.
>
> > > > In sounds like you got a speed boost from doing intelligent  
> > indexing.
> > > > What you are describing sounds more like indexing than  
> > normalization,
> > > > at least to me.



[symfony-users] Re: Symfony Production Performance improvements

2009-03-08 Thread alvaro
At the company I developed a symfony plugin to optimize the Propel
queries and the Propel hydrate method, improving query speed and memory
usage by as much as 5 times.

The plugin supports joins and, thanks to PHP features, returns Propel
objects populated with custom AS columns.

We are thinking of releasing it in the coming weeks, so stay tuned :)

Regards,

Alvaro


On Mar 8, 2009, at 10:20 PM, Gareth McCumskey wrote:

> We have put numerous caching techniques into effect, from Cache- 
> Expires headers to compression of static files like js and html  
> files. Currently we use symfony 1.1 and Propel as the ORM. We have  
> identified the bottleneck generally as being the application  
> processing after the db queries have run to extract the data.
>
> The entire point of my question was to get some info on general tips  
> and tricks we can try out to see if anything helps or if perhaps we  
> have missed any obvious issues that may actually be the cause of the  
> slow performance we are getting. As it is I have gotten quite a few  
> and look forward to getting into the office tomorrow to try them  
> out. Anymore is greatly appreciated.
>
> Of course I am looking through the code to see if there is anyway we  
> can streamline it on that end, but every little bit helps.
>
> Gareth
>
> On Sun, Mar 8, 2009 at 12:27 PM, Crafty_Shadow   
> wrote:
>
> Gareth, you didn't mention what version of symfony you were using,
> also what ORM (if any).
> The best course of optimization will depend on those. Also, as already
> mentioned, caching is your best friend.
>
> On Mar 8, 9:43 am, Gareth McCumskey  wrote:
> > Well, consider a single database table that looks something like  
> this:
> >
> > From_address
> > to_address (possibly multiple addresses comma-seperated)
> > headers
> > spam_report
> > subject
> >
> > And we would have millions of those records in the database.  
> Repeated
> > entries, especially on to_address, means the data is hugely  
> redundant. By
> > normalising we are turning a text search across millions of  
> records with
> > redundant repeated data into a text search over a unique list,  
> then an
> > integer search over primary key (which of course is indexed).
> >
> > On Sun, Mar 8, 2009 at 9:37 AM, Lawrence Krubner  
> wrote:
> >
> >
> >
> > > On Mar 8, 3:26 am, Gareth McCumskey  wrote:
> > > > We had a speed increase because we had a lot of text searches  
> in the old
> > > > system, all going through text fields where the same values  
> were repeated
> > > > over and over. Its therefore a lot faster to search a much  
> smaller table,
> > > > where the text fields are unique, and find the value once,  
> then use an ID
> > > > comparison, being much faster to match integers than text.
> >
> > > In sounds like you got a speed boost from doing intelligent  
> indexing.
> > > What you are describing sounds more like indexing than  
> normalization,
> > > at least to me.
>
>
>
> >





[symfony-users] Re: Symfony Production Performance improvements

2009-03-08 Thread Gareth McCumskey
We do have a comparison benchmark. The older system we hope to replace
is running on the same box in parallel, so we have essentially eliminated
any factors that could bias the comparison towards the old system by
running the two in parallel on the same box.
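
For what it's worth, a minimal sketch of the kind of side-by-side timing this
allows (plain PHP with curl; the two URLs and the request count are only
hypothetical placeholders for the old and the new system on the same box):

<?php
// Minimal sketch: time the same request against the old and the new system
// running in parallel on the same box. URLs and request count are placeholders.
$targets = array(
  'old system' => 'http://localhost/old/search?q=test',
  'new system' => 'http://localhost/new/search?q=test',
);

foreach ($targets as $label => $url) {
  $start = microtime(true);
  for ($i = 0; $i < 50; $i++) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_exec($ch);
    curl_close($ch);
  }
  printf("%s: %.3f s for 50 requests\n", $label, microtime(true) - $start);
}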

On Sun, Mar 8, 2009 at 3:06 PM, Daniel  wrote:

>
> It would be handy to have a low level performance test to perform on
> the server setup.
>
> There are several bottlenecks that can be identified. That's all from
> kernel issues, apache configurations (as discussed), mysql
> configurations, symfony version, symfony configurations, application
> code, ui code, internet connection speed, web browser speed and so on.
>
> It would be interesting to achive some kind of enviroment test for the
> setup. I believe it would be quite hard to test all the mentioned
> parts in once. But what could be tested is the setup of apache, mysql
> and symfony as the "platform" from which almost all symfonians start
> their development. To have these parts tested trough a performance
> testing symfony application we could all achieve a sandbox that
> doesn't have any unnecessary bottlenecks. The rest of the total
> experience is more up to the developer due to the numerous ways to
> work with Symfony.
>
> Any ideas?
>
>
> /Daniel
>
>
> On Mar 8, 11:27 am, Crafty_Shadow  wrote:
> > Gareth, you didn't mention what version of symfony you were using,
> > also what ORM (if any).
> > The best course of optimization will depend on those. Also, as already
> > mentioned, caching is your best friend.
> >
> > On Mar 8, 9:43 am, Gareth McCumskey  wrote:
> >
> > > Well, consider a single database table that looks something like this:
> >
> > > From_address
> > > to_address (possibly multiple addresses comma-seperated)
> > > headers
> > > spam_report
> > > subject
> >
> > > And we would have millions of those records in the database. Repeated
> > > entries, especially on to_address, means the data is hugely redundant.
> By
> > > normalising we are turning a text search across millions of records
> with
> > > redundant repeated data into a text search over a unique list, then an
> > > integer search over primary key (which of course is indexed).
> >
> > > On Sun, Mar 8, 2009 at 9:37 AM, Lawrence Krubner <
> lkrub...@geocities.com>wrote:
> >
> > > > On Mar 8, 3:26 am, Gareth McCumskey  wrote:
> > > > > We had a speed increase because we had a lot of text searches in
> the old
> > > > > system, all going through text fields where the same values were
> repeated
> > > > > over and over. Its therefore a lot faster to search a much smaller
> table,
> > > > > where the text fields are unique, and find the value once, then use
> an ID
> > > > > comparison, being much faster to match integers than text.
> >
> > > > In sounds like you got a speed boost from doing intelligent indexing.
> > > > What you are describing sounds more like indexing than normalization,
> > > > at least to me.
> >
>




[symfony-users] Re: Symfony Production Performance improvements

2009-03-08 Thread Gareth McCumskey
We have put numerous caching techniques into effect, from Cache-Expires
headers to compression of static files like js and html files. Currently we
use symfony 1.1 and Propel as the ORM. We have identified the bottleneck
generally as being the application processing after the db queries have run
to extract the data.

The entire point of my question was to get some info on general tips and
tricks we can try out to see if anything helps or if perhaps we have missed
any obvious issues that may actually be the cause of the slow performance we
are getting. As it is I have gotten quite a few and look forward to getting
into the office tomorrow to try them out. Any more are greatly appreciated.

Of course I am looking through the code to see if there is any way we can
streamline it on that end, but every little bit helps.

Gareth

On Sun, Mar 8, 2009 at 12:27 PM, Crafty_Shadow  wrote:

>
> Gareth, you didn't mention what version of symfony you were using,
> also what ORM (if any).
> The best course of optimization will depend on those. Also, as already
> mentioned, caching is your best friend.
>
> On Mar 8, 9:43 am, Gareth McCumskey  wrote:
> > Well, consider a single database table that looks something like this:
> >
> > From_address
> > to_address (possibly multiple addresses comma-seperated)
> > headers
> > spam_report
> > subject
> >
> > And we would have millions of those records in the database. Repeated
> > entries, especially on to_address, means the data is hugely redundant. By
> > normalising we are turning a text search across millions of records with
> > redundant repeated data into a text search over a unique list, then an
> > integer search over primary key (which of course is indexed).
> >
> > On Sun, Mar 8, 2009 at 9:37 AM, Lawrence Krubner  >wrote:
> >
> >
> >
> > > On Mar 8, 3:26 am, Gareth McCumskey  wrote:
> > > > We had a speed increase because we had a lot of text searches in the
> old
> > > > system, all going through text fields where the same values were
> repeated
> > > > over and over. Its therefore a lot faster to search a much smaller
> table,
> > > > where the text fields are unique, and find the value once, then use
> an ID
> > > > comparison, being much faster to match integers than text.
> >
> > > In sounds like you got a speed boost from doing intelligent indexing.
> > > What you are describing sounds more like indexing than normalization,
> > > at least to me.
> >
>




[symfony-users] Re: Symfony Production Performance improvements

2009-03-08 Thread Daniel

It would be handy to have a low-level performance test to perform on
the server setup.

There are several bottlenecks that can be identified, ranging from
kernel issues, Apache configuration (as discussed) and MySQL
configuration to symfony version, symfony configuration, application
code, UI code, internet connection speed, web browser speed and so on.

It would be interesting to achieve some kind of environment test for the
setup. I believe it would be quite hard to test all the mentioned
parts at once. But what could be tested is the setup of Apache, MySQL
and symfony as the "platform" from which almost all symfonians start
their development. By having these parts tested through a performance-
testing symfony application we could all achieve a sandbox that
doesn't have any unnecessary bottlenecks. The rest of the total
experience is more up to the developer due to the numerous ways to
work with symfony.

Any ideas?


/Daniel


On Mar 8, 11:27 am, Crafty_Shadow  wrote:
> Gareth, you didn't mention what version of symfony you were using,
> also what ORM (if any).
> The best course of optimization will depend on those. Also, as already
> mentioned, caching is your best friend.
>
> On Mar 8, 9:43 am, Gareth McCumskey  wrote:
>
> > Well, consider a single database table that looks something like this:
>
> > From_address
> > to_address (possibly multiple addresses comma-seperated)
> > headers
> > spam_report
> > subject
>
> > And we would have millions of those records in the database. Repeated
> > entries, especially on to_address, means the data is hugely redundant. By
> > normalising we are turning a text search across millions of records with
> > redundant repeated data into a text search over a unique list, then an
> > integer search over primary key (which of course is indexed).
>
> > On Sun, Mar 8, 2009 at 9:37 AM, Lawrence Krubner 
> > wrote:
>
> > > On Mar 8, 3:26 am, Gareth McCumskey  wrote:
> > > > We had a speed increase because we had a lot of text searches in the old
> > > > system, all going through text fields where the same values were 
> > > > repeated
> > > > over and over. Its therefore a lot faster to search a much smaller 
> > > > table,
> > > > where the text fields are unique, and find the value once, then use an 
> > > > ID
> > > > comparison, being much faster to match integers than text.
>
> > > In sounds like you got a speed boost from doing intelligent indexing.
> > > What you are describing sounds more like indexing than normalization,
> > > at least to me.



[symfony-users] Re: Symfony Production Performance improvements

2009-03-08 Thread Crafty_Shadow

Gareth, you didn't mention what version of symfony you were using,
or which ORM (if any).
The best course of optimization will depend on those. Also, as already
mentioned, caching is your best friend.

On Mar 8, 9:43 am, Gareth McCumskey  wrote:
> Well, consider a single database table that looks something like this:
>
> From_address
> to_address (possibly multiple addresses comma-seperated)
> headers
> spam_report
> subject
>
> And we would have millions of those records in the database. Repeated
> entries, especially on to_address, means the data is hugely redundant. By
> normalising we are turning a text search across millions of records with
> redundant repeated data into a text search over a unique list, then an
> integer search over primary key (which of course is indexed).
>
> On Sun, Mar 8, 2009 at 9:37 AM, Lawrence Krubner 
> wrote:
>
>
>
> > On Mar 8, 3:26 am, Gareth McCumskey  wrote:
> > > We had a speed increase because we had a lot of text searches in the old
> > > system, all going through text fields where the same values were repeated
> > > over and over. Its therefore a lot faster to search a much smaller table,
> > > where the text fields are unique, and find the value once, then use an ID
> > > comparison, being much faster to match integers than text.
>
> > In sounds like you got a speed boost from doing intelligent indexing.
> > What you are describing sounds more like indexing than normalization,
> > at least to me.



[symfony-users] Re: Symfony Production Performance improvements

2009-03-07 Thread Gareth McCumskey
Well, consider a single database table that looks something like this:

From_address
to_address (possibly multiple addresses, comma-separated)
headers
spam_report
subject


And we would have millions of those records in the database. Repeated
entries, especially on to_address, mean the data is hugely redundant. By
normalising we are turning a text search across millions of records with
redundant repeated data into a text search over a unique list, then an
integer search over the primary key (which of course is indexed).
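
To make that concrete, a rough sketch of what the lookup becomes after
normalising (Address and Message are hypothetical Propel 1.2 models, with an
address lookup table and a to_address_id foreign key on the message table;
not our actual schema):

<?php
// Rough sketch of the normalised lookup (hypothetical Address/Message models).
// 1. One text match against the small table of unique addresses...
$c = new Criteria();
$c->add(AddressPeer::ADDRESS, 'someone@example.com');
$address = AddressPeer::doSelectOne($c);

// 2. ...then a pure integer comparison on an indexed foreign key, instead of
// a text search over millions of redundant rows.
if (null !== $address) {
  $c = new Criteria();
  $c->add(MessagePeer::TO_ADDRESS_ID, $address->getId());
  $messages = MessagePeer::doSelect($c);
}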

On Sun, Mar 8, 2009 at 9:37 AM, Lawrence Krubner wrote:

>
>
>
> On Mar 8, 3:26 am, Gareth McCumskey  wrote:
> > We had a speed increase because we had a lot of text searches in the old
> > system, all going through text fields where the same values were repeated
> > over and over. Its therefore a lot faster to search a much smaller table,
> > where the text fields are unique, and find the value once, then use an ID
> > comparison, being much faster to match integers than text.
>
>
> In sounds like you got a speed boost from doing intelligent indexing.
> What you are describing sounds more like indexing than normalization,
> at least to me.
>
>
>
>
> >
>




[symfony-users] Re: Symfony Production Performance improvements

2009-03-07 Thread Lawrence Krubner



On Mar 8, 3:26 am, Gareth McCumskey  wrote:
> We had a speed increase because we had a lot of text searches in the old
> system, all going through text fields where the same values were repeated
> over and over. Its therefore a lot faster to search a much smaller table,
> where the text fields are unique, and find the value once, then use an ID
> comparison, being much faster to match integers than text.


It sounds like you got a speed boost from doing intelligent indexing.
What you are describing sounds more like indexing than normalization,
at least to me.







[symfony-users] Re: Symfony Production Performance improvements

2009-03-07 Thread Gareth McCumskey
We had a speed increase because we had a lot of text searches in the old
system, all going through text fields where the same values were repeated
over and over. It's therefore a lot faster to search a much smaller table,
where the text fields are unique, and find the value once, then use an ID
comparison, being much faster to match integers than text.

On Sun, Mar 8, 2009 at 6:58 AM, Lawrence Krubner wrote:

>
>
>
> On Mar 7, 2:06 am, Gareth McCumskey  wrote:
> > Greetings all,
> >
> > We have recently released a project we have been working on for some
> months
> > now as an Alpha version and while we have focussed primarily on bug
> fixing
> > as well as feature completion for the next Alpha release coming up in a
> > week, I can't help but notice something disconcerting.
> >
> > The project we have developed is a replacement of an existing product.
> The
> > previous version, coded before my time at the company, is old, procedural
> > and uses a very inefficient, un-normalised database structure.
> >
> > For our new version, we decided to use symfony for maintainability
> reasons
> > as well as the fact that this version will be a lot more complex than its
> > predecessor so symfony's ability to simplify the development helps us
> > immensely.
> >
> > The problem I have noticed is that the new symfony version seems to be
> > performing ... well ... badly. Loading pages on the new version takes a
> lot
> > longer, talkin 10-50 times longer than the previous version. I went so
> far
> > as to view the development logs and manually run SQL queries on our new
> > normalised database schema vs the old version un-normalised version and
> the
> > new schema performs batter by a factor of 100x so I know that it is
> > definitely not the database slowing things down.
>
> I'm surprised you got a speed boost by normalizing the database. It is
> often the other way around. The perfectly normalized database tends to
> require a lot of JOIN statements. A small degree of de-normalization
> can greatly improve performance. Of course, the great risk of de-
> normalization is that you are storing redundant data, and you may
> eventually end up with a situation where data in table A is different
> than data in table B, for a field that is suppose to hold identical
> data.
>
> It's rare to hear of a speed boost coming from normalizing.
>
>
>
>
> >
>




[symfony-users] Re: Symfony Production Performance improvements

2009-03-07 Thread Lawrence Krubner



On Mar 7, 2:06 am, Gareth McCumskey  wrote:
> Greetings all,
>
> We have recently released a project we have been working on for some months
> now as an Alpha version and while we have focussed primarily on bug fixing
> as well as feature completion for the next Alpha release coming up in a
> week, I can't help but notice something disconcerting.
>
> The project we have developed is a replacement of an existing product. The
> previous version, coded before my time at the company, is old, procedural
> and uses a very inefficient, un-normalised database structure.
>
> For our new version, we decided to use symfony for maintainability reasons
> as well as the fact that this version will be a lot more complex than its
> predecessor so symfony's ability to simplify the development helps us
> immensely.
>
> The problem I have noticed is that the new symfony version seems to be
> performing ... well ... badly. Loading pages on the new version takes a lot
> longer, talkin 10-50 times longer than the previous version. I went so far
> as to view the development logs and manually run SQL queries on our new
> normalised database schema vs the old version un-normalised version and the
> new schema performs batter by a factor of 100x so I know that it is
> definitely not the database slowing things down.

I'm surprised you got a speed boost by normalizing the database. It is
often the other way around. The perfectly normalized database tends to
require a lot of JOIN statements. A small degree of de-normalization
can greatly improve performance. Of course, the great risk of de-
normalization is that you are storing redundant data, and you may
eventually end up with a situation where data in table A is different
than data in table B, for a field that is supposed to hold identical
data.

It's rare to hear of a speed boost coming from normalizing.







[symfony-users] Re: Symfony Production Performance improvements

2009-03-07 Thread Lee Bolding

When it was suggested to me by my sysadmin he did actually use the  
term "scan" - however, I'm not convinced removing the file would stop  
Apache looking for one. But, I could be wrong - it *has* happened  
before ;)

Certainly, it doesn't make sense to leave your rewrite rules in it
unless you can't put them into httpd.conf for some reason (shared  
hosting etc). But, if that's the case then you've found the solution  
to your speed problems - get a dedicated server ;)

On 7 Mar 2009, at 20:28, James Cauwelier wrote:

>
> I 've read that one before.  But I wonder... Is it the reading that is
> bad about .htaccess?  I would think that it is the scanning process
> that slows down the request.  When the usage of .htaccess is allowed,
> apache will check on every request if a .htaccess is present.  I you
> request an image, like www.domain.com/images/picture.jpg apache will
> look for a .htaccess in the folder 'images'.  If no such file is
> found, it will try to locate one in your web root.
>
> Disallowing the usage of .htaccess in your httpd.conf will eliminate
> this overhead as Lee suggested.
>
> This is actually a question as I am not sure of the statement above.
> Anybody got any experience with system administration?
>
> James
>
> On Mar 7, 8:42 pm, Lee Bolding  wrote:
>> I couldn't agree more.
>>
>> Did I already mention moving the .htaccess rules into the apache
>> httpd.conf? that way it doesn't get read on every single request...
>>
>> On 7 Mar 2009, at 15:00, James Cauwelier wrote:
>>
>>
>>
>>> Hi,
>>
>>> There are some good ideas in this post, but they don't have a lot of
>>> value if you haven 't identified your bottelenecks first.  Did you  
>>> run
>>> this website on an isolated VPS or dedicated server?  In that case,
>>> you could look at the processes taking the most resources.  If your
>>> database is in fact a big consumer, then caching could be really
>>> helpful.  But bear in mind that cache has to be created and if you  
>>> 're
>>> not satisfied with the uncached performance, then you should fix  
>>> that.
>>
>>> Are you using an ORM?  If yes, check if you 're not running too many
>>> queries per request?  Too many queries could be fixed by joining  
>>> with
>>> reference tables.  Sometimes you should use doSelectJoinX()  
>>> instead of
>>> doSelect () with Propel.
>>
>>> What 's your database schema like?  Do you use MySQL MyIsam or  
>>> innoDb
>>> tables?  If you are using MyIsam and you have a lot of traffic, then
>>> know that MyIsam uses table level locking.  If you update a row in
>>> your table, all other statements have to wait until the update/ 
>>> create
>>> is finished.  Identify your bottleneck and test in a production like
>>> environment.  Isolating one query is not really representative.  Run
>>> some benchmarks and simulate real usage.
>>
>>> Read 'Adding your own timer' on
>>> http://www.symfony-project.org/book/1_2/16-Application-Management-Too 
>>> ...
>>> Accumulate your time with every database query, to see how much time
>>> is spent by the database.  Also look at the timings in you query log
>>> from the debug bar.  Do they seem correct compared to your own
>>> findings when isolating a query?
>>
>>> A lot of RAM would indeed be nice, but pixelmeister is only  
>>> suggesting
>>> this because they seem to make extensive use of memcached.  If you  
>>> 're
>>> not memcaching, then more than 2GB of RAM won 't do you much good.
>>> (unless RAM is a bottleneck for some other reason, like heavy CRON
>>> jobs.  Again, identify your bottlenecks)
>>
>>> Marijn 's question is a very good one: "Are we talking about  
>>> perceived
>>> performance or actual performance?" (pttt ... Identify your
>>> bottleneck)
>>
>>> James
>>
>>> On 7 mrt, 14:33, pixelmeister  wrote:
 Hi,
>>
 i also had this kind of problems on a big site (120 Mysql Tables)
 with
 aprox. 10 page views a Day.
>>
 The best way to get the site running fast is:
>>
 Use caching with memcached (sfMemCached class)
 We had a lot of problems with the standard file cache and
 SqliteCache.
>>
 Use xcache or an other bytecode cachesystem like eAccelerator
>>
 Improve your mysql indexes manually. I couldn't find a proper way
 to do it
 with the schema.yml files.
>>
 Give your Server a lot, really a lot of RAM :-)
> >





[symfony-users] Re: Symfony Production Performance improvements

2009-03-07 Thread James Cauwelier

I've read that one before.  But I wonder... Is it the reading that is
bad about .htaccess?  I would think that it is the scanning process
that slows down the request.  When the usage of .htaccess is allowed,
Apache will check on every request if a .htaccess is present.  If you
request an image, like www.domain.com/images/picture.jpg, Apache will
look for a .htaccess in the folder 'images'.  If no such file is
found, it will try to locate one in your web root.

Disallowing the usage of .htaccess in your httpd.conf will eliminate
this overhead as Lee suggested.

This is actually a question as I am not sure of the statement above.
Anybody got any experience with system administration?

James

On Mar 7, 8:42 pm, Lee Bolding  wrote:
> I couldn't agree more.
>
> Did I already mention moving the .htaccess rules into the apache  
> httpd.conf? that way it doesn't get read on every single request...
>
> On 7 Mar 2009, at 15:00, James Cauwelier wrote:
>
>
>
> > Hi,
>
> > There are some good ideas in this post, but they don't have a lot of
> > value if you haven 't identified your bottelenecks first.  Did you run
> > this website on an isolated VPS or dedicated server?  In that case,
> > you could look at the processes taking the most resources.  If your
> > database is in fact a big consumer, then caching could be really
> > helpful.  But bear in mind that cache has to be created and if you 're
> > not satisfied with the uncached performance, then you should fix that.
>
> > Are you using an ORM?  If yes, check if you 're not running too many
> > queries per request?  Too many queries could be fixed by joining with
> > reference tables.  Sometimes you should use doSelectJoinX() instead of
> > doSelect () with Propel.
>
> > What 's your database schema like?  Do you use MySQL MyIsam or innoDb
> > tables?  If you are using MyIsam and you have a lot of traffic, then
> > know that MyIsam uses table level locking.  If you update a row in
> > your table, all other statements have to wait until the update/create
> > is finished.  Identify your bottleneck and test in a production like
> > environment.  Isolating one query is not really representative.  Run
> > some benchmarks and simulate real usage.
>
> > Read 'Adding your own timer' on
> >http://www.symfony-project.org/book/1_2/16-Application-Management-Too...
> > Accumulate your time with every database query, to see how much time
> > is spent by the database.  Also look at the timings in you query log
> > from the debug bar.  Do they seem correct compared to your own
> > findings when isolating a query?
>
> > A lot of RAM would indeed be nice, but pixelmeister is only suggesting
> > this because they seem to make extensive use of memcached.  If you 're
> > not memcaching, then more than 2GB of RAM won 't do you much good.
> > (unless RAM is a bottleneck for some other reason, like heavy CRON
> > jobs.  Again, identify your bottlenecks)
>
> > Marijn 's question is a very good one: "Are we talking about perceived
> > performance or actual performance?" (pttt ... Identify your
> > bottleneck)
>
> > James
>
> > On 7 mrt, 14:33, pixelmeister  wrote:
> >> Hi,
>
> >> i also had this kind of problems on a big site (120 Mysql Tables)  
> >> with
> >> aprox. 10 page views a Day.
>
> >> The best way to get the site running fast is:
>
> >> Use caching with memcached (sfMemCached class)
> >> We had a lot of problems with the standard file cache and  
> >> SqliteCache.
>
> >> Use xcache or an other bytecode cachesystem like eAccelerator
>
> >> Improve your mysql indexes manually. I couldn't find a proper way  
> >> to do it
> >> with the schema.yml files.
>
> >> Give your Server a lot, really a lot of RAM :-)



[symfony-users] Re: Symfony Production Performance improvements

2009-03-07 Thread Lee Bolding

I couldn't agree more.

Did I already mention moving the .htaccess rules into the Apache
httpd.conf? That way it doesn't get read on every single request...

On 7 Mar 2009, at 15:00, James Cauwelier wrote:

>
> Hi,
>
>
> There are some good ideas in this post, but they don't have a lot of
> value if you haven 't identified your bottelenecks first.  Did you run
> this website on an isolated VPS or dedicated server?  In that case,
> you could look at the processes taking the most resources.  If your
> database is in fact a big consumer, then caching could be really
> helpful.  But bear in mind that cache has to be created and if you 're
> not satisfied with the uncached performance, then you should fix that.
>
> Are you using an ORM?  If yes, check if you 're not running too many
> queries per request?  Too many queries could be fixed by joining with
> reference tables.  Sometimes you should use doSelectJoinX() instead of
> doSelect () with Propel.
>
> What 's your database schema like?  Do you use MySQL MyIsam or innoDb
> tables?  If you are using MyIsam and you have a lot of traffic, then
> know that MyIsam uses table level locking.  If you update a row in
> your table, all other statements have to wait until the update/create
> is finished.  Identify your bottleneck and test in a production like
> environment.  Isolating one query is not really representative.  Run
> some benchmarks and simulate real usage.
>
> Read 'Adding your own timer' on
> http://www.symfony-project.org/book/1_2/16-Application-Management-Tools#chapter_16_sub_web_debug_toolbar
> Accumulate your time with every database query, to see how much time
> is spent by the database.  Also look at the timings in you query log
> from the debug bar.  Do they seem correct compared to your own
> findings when isolating a query?
>
> A lot of RAM would indeed be nice, but pixelmeister is only suggesting
> this because they seem to make extensive use of memcached.  If you 're
> not memcaching, then more than 2GB of RAM won 't do you much good.
> (unless RAM is a bottleneck for some other reason, like heavy CRON
> jobs.  Again, identify your bottlenecks)
>
> Marijn 's question is a very good one: "Are we talking about perceived
> performance or actual performance?" (pttt ... Identify your
> bottleneck)
>
>
> James
>
>
> On 7 mrt, 14:33, pixelmeister  wrote:
>> Hi,
>>
>> i also had this kind of problems on a big site (120 Mysql Tables)  
>> with
>> aprox. 10 page views a Day.
>>
>> The best way to get the site running fast is:
>>
>> Use caching with memcached (sfMemCached class)
>> We had a lot of problems with the standard file cache and  
>> SqliteCache.
>>
>> Use xcache or an other bytecode cachesystem like eAccelerator
>>
>> Improve your mysql indexes manually. I couldn't find a proper way  
>> to do it
>> with the schema.yml files.
>>
>> Give your Server a lot, really a lot of RAM :-)
> >





[symfony-users] Re: Symfony Production Performance improvements

2009-03-07 Thread James Cauwelier

Hi,


There are some good ideas in this post, but they don't have a lot of
value if you haven't identified your bottlenecks first.  Did you run
this website on an isolated VPS or dedicated server?  In that case,
you could look at the processes taking the most resources.  If your
database is in fact a big consumer, then caching could be really
helpful.  But bear in mind that the cache has to be created, and if you're
not satisfied with the uncached performance, then you should fix that.

Are you using an ORM?  If yes, check whether you're running too many
queries per request.  Too many queries can often be fixed by joining with
reference tables.  Sometimes you should use doSelectJoinX() instead of
doSelect() with Propel.
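
For example (a sketch with hypothetical generated Propel 1.2 classes; a book
table with an author_id foreign key would get a doSelectJoinAuthor() method
generated on its peer):

<?php
// Sketch: avoid one extra query per row by hydrating the related object in
// the same SELECT. BookPeer and doSelectJoinAuthor() are hypothetical
// generated Propel 1.2 classes/methods.
$c = new Criteria();
$c->add(BookPeer::PUBLISHED_AT, '2009-01-01', Criteria::GREATER_THAN);

// N+1 queries: every getAuthor() call below fires its own SELECT.
$books = BookPeer::doSelect($c);

// 1 query, 2 tables: the join hydrates the Author objects up front, so
// getAuthor() is served from memory.
$books = BookPeer::doSelectJoinAuthor($c);

foreach ($books as $book) {
  echo $book->getTitle().' by '.$book->getAuthor()->getName()."\n";
}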

What's your database schema like?  Do you use MySQL MyISAM or InnoDB
tables?  If you are using MyISAM and you have a lot of traffic, then
know that MyISAM uses table-level locking.  If you update a row in
your table, all other statements have to wait until the update/create
is finished.  Identify your bottleneck and test in a production-like
environment.  Isolating one query is not really representative.  Run
some benchmarks and simulate real usage.

Read 'Adding your own timer' on
http://www.symfony-project.org/book/1_2/16-Application-Management-Tools#chapter_16_sub_web_debug_toolbar
Accumulate your time with every database query, to see how much time
is spent by the database.  Also look at the timings in your query log
from the debug bar.  Do they seem correct compared to your own
findings when isolating a query?
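
A minimal sketch of such a timer, assuming the sfTimerManager API that
chapter describes:

<?php
// Minimal sketch: wrap a suspect block in a custom timer so the time spent
// there shows up in the web debug toolbar next to symfony's own timers.
$timer = sfTimerManager::getTimer('my propel queries');

// ... run the doSelect()/doSelectJoinXXX() calls you want to measure here ...

// Stop the timer and add the elapsed time to the 'my propel queries' total.
$timer->addTime();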

A lot of RAM would indeed be nice, but pixelmeister is only suggesting
this because they seem to make extensive use of memcached.  If you're
not memcaching, then more than 2GB of RAM won't do you much good
(unless RAM is a bottleneck for some other reason, like heavy CRON
jobs.  Again, identify your bottlenecks).

Marijn's question is a very good one: "Are we talking about perceived
performance or actual performance?" (pttt ... identify your
bottleneck)


James


On 7 mrt, 14:33, pixelmeister  wrote:
> Hi,
>
> i also had this kind of problems on a big site (120 Mysql Tables) with
> aprox. 10 page views a Day.
>
> The best way to get the site running fast is:
>
> Use caching with memcached (sfMemCached class)
> We had a lot of problems with the standard file cache and SqliteCache.
>
> Use xcache or an other bytecode cachesystem like eAccelerator
>
> Improve your mysql indexes manually. I couldn't find a proper way to do it
> with the schema.yml files.
>
> Give your Server a lot, really a lot of RAM :-)



[symfony-users] Re: Symfony Production Performance improvements

2009-03-07 Thread pixelmeister
Hi,

I also had this kind of problem on a big site (120 MySQL tables) with
approx. 10 page views a day.

The best way to get the site running fast is:

Use caching with memcached (sfMemCached class); we had a lot of problems
with the standard file cache and SqliteCache (see the sketch below).

Use XCache or another bytecode cache system like eAccelerator.

Improve your MySQL indexes manually. I couldn't find a proper way to do it
with the schema.yml files.

Give your server a lot, really a lot, of RAM :-)
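
A minimal sketch of that memcached idea, assuming symfony 1.1's
sfMemcacheCache backend (which I believe is the class meant by sfMemCached)
and a memcached daemon on localhost; the key, lifetime and the expensive call
are just placeholders:

<?php
// Minimal sketch: keep an expensive result in memcached instead of the file
// or SQLite cache. Assumes sfMemcacheCache from symfony 1.1 and a memcached
// daemon on localhost:11211; StatsPeer::computeSidebarStats() is hypothetical.
$cache = new sfMemcacheCache(array('host' => 'localhost', 'port' => 11211));

if (!$cache->has('sidebar_stats')) {
  $stats = StatsPeer::computeSidebarStats();
  // serialize() so the sketch does not depend on the backend's own handling
  // of non-string values.
  $cache->set('sidebar_stats', serialize($stats), 300);
}

$stats = unserialize($cache->get('sidebar_stats'));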




[symfony-users] Re: Symfony Production Performance improvements

2009-03-07 Thread Thomas Rabaix
Hello,

Server-side performance can be checked by reading the symfony log:

   - check that there are not too many queries (use left joins)
   - do not use too many include_partial calls, as a partial view object is
   created every time. Sometimes it is just fine to do a plain include (not
   nice, but in a loop this can dramatically speed up your application)
   - use the cache
   - check the source code: try to remove unnecessary loops; if you work with
   arrays, use references rather than copying the array elements (see the
   sketch at the end of this message)

Client-side performance:

   - read the Yahoo performance pages
   - move JS to the end of the page, avoid JavaScript document.write, merge
   JS and CSS files, use a CDN

Keep in mind symfony will never be quicker than plain PHP.
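
To illustrate the references point, a minimal sketch in plain PHP ($rows and
the transformation are purely illustrative):

<?php
// Minimal sketch of the "use references" tip: update array elements in place
// instead of building a second, transformed copy of a large array.
$rows = array(
  array('subject' => 'hello', 'spam_score' => 1.2),
  array('subject' => 'buy now', 'spam_score' => 9.7),
);

// With a reference, each element is modified in place; without one you would
// typically rebuild the whole array into a new variable.
foreach ($rows as &$row) {
  $row['is_spam'] = $row['spam_score'] > 5.0;
}
unset($row); // always break the reference after the loop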

On Sat, Mar 7, 2009 at 11:49 AM, Marijn wrote:

>
> Are we talking about perceived performance or actual performance? Most
> of the time it is the front end that floors the perceived performance.
> Perhaps you should have a look at this Yahoo research, it can be very
> helpfull for performance improvements:
> http://developer.yahoo.com/performance/rules.html
>
> Besides that you can turn of/strip symfony core stuff you don't need.
>
> Kind regards,
>
> Marijn
>
> On Mar 7, 10:43 am, Jeremy Benoist  wrote:
> > Hi,
> >
> > Do you already take a look to :
> http://www.symfony-project.org/book/1_2/18-Performance
> > ?
> > Lots of good practice to learn in this page !
> >
> > Quick things I often use to improve performance :
> > - use sf cache a lot ! (but cleverly)
> > - use a php accelerator (APC, eAccelerator)
> > - use a mify js/css
> > - use a different cache than the sfFileCache (I often use
> > sfSQLiteCache)
> >
> > Good luck :-)
> >
> > Jeremy
> >
> > On 7 mar, 08:06, Gareth McCumskey  wrote:
> >
> >
> >
> > > Greetings all,
> >
> > > We have recently released a project we have been working on for some
> months
> > > now as an Alpha version and while we have focussed primarily on bug
> fixing
> > > as well as feature completion for the next Alpha release coming up in a
> > > week, I can't help but notice something disconcerting.
> >
> > > The project we have developed is a replacement of an existing product.
> The
> > > previous version, coded before my time at the company, is old,
> procedural
> > > and uses a very inefficient, un-normalised database structure.
> >
> > > For our new version, we decided to use symfony for maintainability
> reasons
> > > as well as the fact that this version will be a lot more complex than
> its
> > > predecessor so symfony's ability to simplify the development helps us
> > > immensely.
> >
> > > The problem I have noticed is that the new symfony version seems to be
> > > performing ... well ... badly. Loading pages on the new version takes a
> lot
> > > longer, talkin 10-50 times longer than the previous version. I went so
> far
> > > as to view the development logs and manually run SQL queries on our new
> > > normalised database schema vs the old version un-normalised version and
> the
> > > new schema performs batter by a factor of 100x so I know that it is
> > > definitely not the database slowing things down. I even installed
> > > eAccelerator and tested the PHP processing speeds after that but have
> noted
> > > no significant changes.
> >
> > > My question .. are there any perrformance enhancements for symfony on a
> > > production server that anyone can think of that might help the
> situation?
> > > Also, does using Ajax loaded div's contribute negatively to the
> performance
> > > issues?
> >
> > > Thanks and look forward to some tips :D
> >
> > > Gareth
> >
>


-- 
Thomas Rabaix
http://rabaix.net




[symfony-users] Re: Symfony Production Performance improvements

2009-03-07 Thread Marijn

Are we talking about perceived performance or actual performance? Most
of the time it is the front end that drags down the perceived performance.
Perhaps you should have a look at this Yahoo research, it can be very
helpful for performance improvements:
http://developer.yahoo.com/performance/rules.html

Besides that, you can turn off/strip symfony core stuff you don't need.

Kind regards,

Marijn

On Mar 7, 10:43 am, Jeremy Benoist  wrote:
> [...]



[symfony-users] Re: Symfony Production Performance improvements

2009-03-07 Thread Jeremy Benoist

Hi,

Have you already taken a look at
http://www.symfony-project.org/book/1_2/18-Performance ?
There are lots of good practices to learn on this page!

Quick things I often use to improve performance:
- use the sf cache a lot! (but cleverly)
- use a PHP accelerator (APC, eAccelerator)
- minify your js/css
- use a different cache than the sfFileCache (I often use
sfSQLiteCache; see the sketch after this list)
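
To make the last point a bit more concrete, here is a minimal sketch (not a
drop-in recipe: it assumes symfony 1.2's sfSQLiteCache class, and
buildExpensiveReport() is an invented placeholder for any costly query) of
on-demand caching backed by SQLite instead of the file cache:

<?php
// Sketch: cache an expensive result in sfSQLiteCache instead of sfFileCache.
$cache = new sfSQLiteCache(array(
  'database' => sfConfig::get('sf_cache_dir').'/query_cache.db',
));

$key = 'report_latest';

if ($cache->has($key))
{
  $report = unserialize($cache->get($key));
}
else
{
  $report = buildExpensiveReport();           // hypothetical expensive query
  $cache->set($key, serialize($report), 600); // keep it for ten minutes
}

The same has()/get()/set() calls work against any sfCache backend, so swapping
sfFileCache for sfSQLiteCache (or sfAPCCache) is mostly a configuration change.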

Good luck :-)

Jeremy

On Mar 7, 08:06, Gareth McCumskey  wrote:
> Greetings all,
>
> We have recently released a project we have been working on for some months
> now as an Alpha version and while we have focussed primarily on bug fixing
> as well as feature completion for the next Alpha release coming up in a
> week, I can't help but notice something disconcerting.
>
> The project we have developed is a replacement of an existing product. The
> previous version, coded before my time at the company, is old, procedural
> and uses a very inefficient, un-normalised database structure.
>
> For our new version, we decided to use symfony for maintainability reasons
> as well as the fact that this version will be a lot more complex than its
> predecessor so symfony's ability to simplify the development helps us
> immensely.
>
> The problem I have noticed is that the new symfony version seems to be
> performing ... well ... badly. Loading pages on the new version takes a lot
> longer, talking 10-50 times longer than the previous version. I went so far
> as to view the development logs and manually run SQL queries on our new
> normalised database schema vs the old un-normalised version, and the
> new schema performs better by a factor of 100x, so I know that it is
> definitely not the database slowing things down. I even installed
> eAccelerator and tested the PHP processing speeds after that but have noted
> no significant changes.
>
> My question: are there any performance enhancements for symfony on a
> production server that anyone can think of that might help the situation?
> Also, does using Ajax-loaded divs contribute negatively to the performance
> issues?
>
> Thanks and look forward to some tips :D
>
> Gareth