[PHP-DEV] [PATCH]

2010-01-23 Thread Yasser Kashfi
Salaam

I am using PHP on ARM. When I cross-compile php-5.3.1, a linkage error
occurs. I worked around the problem by editing the two files below:
ext/standard/dl.h:
+#if defined(HAVE_LIBDL)
PHPAPI int php_load_extension(char *filename, int type, int start_now TSRMLS_DC);
+#endif

main/php_ini.c:
+#if defined(HAVE_LIBDL)
    php_load_extension(*((char **) arg), MODULE_PERSISTENT, 0 TSRMLS_CC);
+#else
+   zval *extension = (zval *) arg;
+   zval retval;
+
+   php_dl(extension, MODULE_PERSISTENT, &retval, 0 TSRMLS_CC);
+#endif

Best regards.




Re: [PHP-DEV] About optimization

2010-01-23 Thread steve
 I doubt anyone does I1/D1/L2 cache profiling for PHP.

 I did a little bit of CPU cache profiling of PHP using oprofile, more
 out of curiosity than anything. It was a couple of years ago now.

 http://wikitech.wikimedia.org/view/Oprofile

 But you don't need oprofile, you can make changes based on theory, and
 then measure the execution time of the result.

I hardly know where to go with that, because I agree so much. Yet so
much depends on what the theory is based on. Measurement and a decent
test matrix are key; valgrind/callgrind and the other tools can help.
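For what it's worth, here is a minimal sketch of that measure-first
loop as a standalone C harness. Everything in it is invented for
illustration: candidate() merely stands in for whatever code path you
changed, and now_seconds() is just a gettimeofday() wrapper.

#include <stdio.h>
#include <sys/time.h>

static double now_seconds(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

/* Stand-in for the code path under test; replace with the real thing.
   The volatile sink keeps the compiler from discarding the work. */
static volatile unsigned long sink;
static void candidate(void)
{
    unsigned long h = 5381;
    for (const char *p = "some_symbol_name"; *p; p++)
        h = h * 33 + (unsigned char)*p;
    sink = h;
}

int main(void)
{
    enum { N = 10 * 1000 * 1000 };
    double t0 = now_seconds();
    for (int i = 0; i < N; i++)
        candidate();
    double t1 = now_seconds();
    printf("%d iterations in %.3f s (%.1f ns/iter)\n",
           N, t1 - t0, (t1 - t0) / N * 1e9);
    return 0;
}

Run it before and after a change, on the same machine, several times;
the test matrix comes from varying the inputs.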

Honestly, I think people should stay out of implementing languages
(interpreters or compilers; interpreters are the more complex case,
often best simplified by doing JIT) unless they have done CPU design.
It is not like the pre-386 days. These days CPUs are designed for the
compilers (either as they are, or as they are likely to be); the CPU
designer decides how the compiler should operate, and it is their
theories that matter. Not always -- just as in literature, the author
may create something that is beyond their own grasp and best
understood by others.

 PHP doesn't ship with an optimizer, byte code cache, or JIT.

 But community members are developing those things nonetheless.
 eAccelerator has an optimizer, there are several so-called byte code
 caches, and Roadsend is a promising compiler project.

They have been developing those things for more than a decade... PHP 4
was the last time there were real performance improvements in PHP
itself. The fact that there are several so-called byte code caches is
not a good thing: it means that PHP is broken and lots of people are
trying to fix it. It also means that none of them has succeeded, since
success would have meant inclusion in the PHP core by now.

 Rasmus had the idea that it should
 do simple things and be easy, and if you were going to do anything
 else, then you should have the money to do so. Fair enough really.

 Rasmus is not the whole community.

Like a company founder who sets the corporate culture, Rasmus should
not be counted out so easily. Founders earn such power, until they are
booted -- which is not going to happen here, nor should it.

The guys at Zend muscled in to change the culture as well, and have
succeeded to a large degree, pushing PHP into the enterprise by
offering a full version of PHP -- not free, of course. Hence the
reason for not having a byte code cache in the core. And the whole
optimizer, which was the decoder half of their encoder project, just
made for bad karma. Enough time has passed for a new round of
wrestling for control; we'll see how the FBJIT goes. Which just goes
to show: if you really want something done, put some muscle into it,
take over, or fork. Or keep it to yourself.




Re: [PHP-DEV] About optimization

2010-01-23 Thread Rasmus Lerdorf
I think some of this discussion has been coming from very different
and interesting angles.  Let me explain how I see and use PHP.

PHP is the frontend of your backend.  It is not your backend in any
sizable system.  By that I mean that PHP is not the place to play around
with large data sets.  Databases, Cassandra, Hadoop, memcache, etc.
serve that purpose.  If you need functionality on top of those backend
systems that manipulates millions, or even just thousands, of rows in
an individual request, then you need to build it in something other
than PHP.  It also isn't for forking background processes, hence we have
technologies like Gearman.  PHP punts the hard stuff to technologies
designed to handle those tasks and instead focuses on the glue layers.

In the early days of PHP (1994-1996) I saw PHP as primarily a C API for
extending the web server without needing to know the internal workings
of the web server.  The macro-templating language was a cute feature
that let you expose the functionality you built in C as a set of
template tags.

The Web grew so fast, and was initially mostly ignored by lower-level
C developers, that the people tasked with building web sites didn't
have the background to write C code against the PHP API.  That caused
the focus to shift away from the API toward a bunch of canned tags
that people commonly needed or requested.

This hasn't changed that much over the years.  The templating language
has matured quite a bit and is now powerful enough to write extremely
complicated things in it.  But you still shouldn't write a database in
PHP.

This isn't about server costs.  It is about choosing the right tool for
the right part of the job.  A Javascript library for the client-side
frontend, PHP for the server-side frontend, C/C++ for your middle-layer
and an appropriate datastore behind it all and you can build amazing
things with PHP.  The largest destinations on the Web today are written
exactly like this.

This doesn't mean we shouldn't try to optimize PHP, and you will note
that APC is scheduled to be included in PHP 6, but there is always going
to be significant overhead incurred by a scripting language.  PHP 6
needs more room to store strings, for example, because we live in a
Unicode world.  And yes, there are obviously ways to reduce the overhead
with custom datatypes, but it makes things more complicated because, as
I said, PHP is glue.  By having a single datatype that all the
extensions understand, everything can talk to everything.  Once you
start moving away from the single zval approach towards different
datatypes for different purposes, you have to retrofit all existing
extensions to teach them how to treat these new datatypes and it makes
the already too-complicated extension API even more complicated which
would hurt the glue aspect of PHP.
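To make the glue point concrete, the PHP 5 zval is a single tagged
union that every extension reads and writes. Roughly (paraphrasing the
PHP 5-era Zend/zend.h; the stand-in typedefs at the top are mine,
added only so the sketch compiles on its own):

typedef unsigned int  zend_uint;      /* stand-ins for the real Zend typedefs */
typedef unsigned char zend_uchar;
typedef struct _hashtable HashTable;  /* opaque for this sketch */
typedef struct {                      /* simplified zend_object_value */
    unsigned int handle;
    void *handlers;
} zend_object_value;

typedef union _zvalue_value {
    long   lval;               /* IS_BOOL, IS_LONG, IS_RESOURCE */
    double dval;               /* IS_DOUBLE */
    struct {
        char *val;
        int   len;
    } str;                     /* IS_STRING */
    HashTable *ht;             /* IS_ARRAY */
    zend_object_value obj;     /* IS_OBJECT */
} zvalue_value;

typedef struct _zval_struct {
    zvalue_value value;        /* the payload */
    zend_uint    refcount__gc; /* shared by refcount, copied on write */
    zend_uchar   type;         /* discriminates the union */
    zend_uchar   is_ref__gc;
} zval;

Anything that understands this one struct can exchange data with
anything else, which is exactly the property a separate datatype per
purpose would break.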

-Rasmus




Re: [PHP-DEV] About optimization

2010-01-23 Thread Tim Starling
steve wrote:
 Like a founder of a company that sets the corporate culture, don't
 count Rasmus out so easily. Founders earn such power. Until they are
 booted. It is not going to, nor should it, happen here.

I don't think PHP has an Ulrich Drepper or a Linus Torvalds. When I read
this list, I see Rasmus arguing on equal terms with other developers; he
doesn't arrogantly pull rank. I think that's good, because Rasmus is
very conservative, and I think PHP has a lot of potential that he
doesn't see.


Rasmus Lerdorf wrote:
 This isn't about server costs.  It is about choosing the right tool for
 the right part of the job.  A Javascript library for the client-side
 frontend, PHP for the server-side frontend, C/C++ for your middle-layer
 and an appropriate datastore behind it all and you can build amazing
 things with PHP.  The largest destinations on the Web today are written
 exactly like this.

That's not the world I live in. I work on a pure-PHP application which
is widely used on servers where the installing user does not have the
ability to change their php.ini or to install extensions or middleware.
The same application (with a few small extensions in C/C++) is used to
run one of the largest destinations on the Web. It all works just fine,
and you sell PHP short when you imply that it can't do this. We're not
going to fork MediaWiki just because you think it can't be done: it can
be done and we're doing it.

It all works beautifully: we have volunteers from the Wikimedia side,
and volunteers from the external installation side, and they work
together to develop features that are usable by both.

The small amount of money Wikimedia has comes mostly from individual
donors interested in seeing Wikipedia continue. It would be imprudent to
spend it all on software development without at least trying to attract
volunteers who, instead of donating money, can donate their time.

 And yes, there are obviously ways to reduce the overhead
 with custom datatypes, but it makes things more complicated because, as
 I said, PHP is glue.  By having a single datatype that all the
 extensions understand, everything can talk to everything.  Once you
 start moving away from the single zval approach towards different
 datatypes for different purposes, you have to retrofit all existing
 extensions to teach them how to treat these new datatypes and it makes
 the already too-complicated extension API even more complicated which
 would hurt the glue aspect of PHP.


Quite so, but I didn't actually suggest anything which would break
source compatibility with the bulk of extensions.

* I suggested having a vector-like mode for hashtables (a rough sketch
follows after this list). This could be implemented while maintaining
compatibility with the usual insert, find and iteration macros and
functions. Only extensions which access the HashTable structure
directly, such as ext/standard/array.c, would need changes.

* I suggested having a more compact, variable-length zend_op. There are
very few extensions that access zend_op, just things like reflection,
APC and parsekit.

* I suggested compact object storage for objects which have the same set
of member variables as their class declaration. This could probably be
implemented in the default handlers without touching the rest of the code.
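To sketch that first suggestion (the HashTableSketch type and its
fields are invented for illustration, not the real Zend HashTable):
while the keys are exactly 0..n-1, the table can keep values in a flat
array and index it directly, converting to the classic bucket layout
the moment a string key or a hole appears.

#include <stddef.h>

typedef struct _Bucket Bucket;      /* classic chained bucket, elided */

typedef struct {
    unsigned int  nNumOfElements;
    unsigned char packed;           /* 1 while keys are exactly 0..n-1 */
    union {
        void   **vector;            /* packed mode: value i lives at [i] */
        Bucket **arBuckets;         /* classic mode: hash chains */
    } u;
} HashTableSketch;

/* Lookup needs no hashing while packed; the classic path is elided. */
static void *ht_index_find(HashTableSketch *ht, unsigned long h)
{
    if (ht->packed)
        return (h < ht->nNumOfElements) ? ht->u.vector[h] : NULL;
    /* ... fall back to the usual bucket walk here ... */
    return NULL;
}

The usual macros would keep working because they go through functions
like this one; only code that pokes at the buckets directly would see
the difference.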

The original poster suggested an optimisation pass post-compile, which
obviously doesn't break anything, because there are extensions that do
it already. So I don't know who you're arguing against.
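A toy version of such a pass, run over an invented three-address op
format rather than real zend_ops (op_t and fold_constants are made up
for illustration; a real pass would also have to patch jump targets):

#include <stdio.h>

/* Invented opcode format standing in for zend_op. */
typedef enum { OP_LOAD_CONST, OP_ADD } opcode_t;
typedef struct { opcode_t op; int dst, src1, src2; long value; } op_t;

/* Rewrite ADDs whose operands are known constants into LOAD_CONSTs. */
static void fold_constants(op_t *ops, int n)
{
    long cval[16] = {0};
    int  cknown[16] = {0};

    for (int i = 0; i < n; i++) {
        op_t *o = &ops[i];
        if (o->op == OP_LOAD_CONST) {
            cval[o->dst] = o->value;
            cknown[o->dst] = 1;
        } else if (o->op == OP_ADD && cknown[o->src1] && cknown[o->src2]) {
            o->op = OP_LOAD_CONST;            /* rewrite in place */
            o->value = cval[o->src1] + cval[o->src2];
            cval[o->dst] = o->value;
            cknown[o->dst] = 1;
        } else {
            cknown[o->dst] = 0;  /* destination no longer a known constant */
        }
    }
}

int main(void)
{
    op_t ops[] = {
        { OP_LOAD_CONST, 0, 0, 0, 2 },  /* t0 = 2 */
        { OP_LOAD_CONST, 1, 0, 0, 3 },  /* t1 = 3 */
        { OP_ADD,        2, 0, 1, 0 },  /* t2 = t0 + t1 -> LOAD_CONST 5  */
        { OP_ADD,        3, 2, 2, 0 },  /* t3 = t2 + t2 -> LOAD_CONST 10 */
    };
    fold_constants(ops, 4);
    printf("t3 folds to %ld\n", ops[3].value);  /* prints 10 */
    return 0;
}

On its own a pass like this saves a few ops per request; without an
opcode cache the pass itself can cost more than it saves, which is why
these live in extensions.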

-- Tim Starling




Re: [PHP-DEV] About optimization

2010-01-23 Thread Rasmus Lerdorf
Tim Starling wrote:
 That's not the world I live in. I work on a pure-PHP application which
 is widely used on servers where the installing user does not have the
 ability to change their php.ini or to install extensions or middleware.
 The same application (with a few small extensions in C/C++) is used to
 run one of the largest destinations on the Web. It all works just fine,
 and you sell PHP short when you imply that it can't do this. We're not
 going to fork MediaWiki just because you think it can't be done: it can
 be done and we're doing it.

But aren't the people who have large installs also likely to be running
on something slightly beyond a $10/month shared hosting account?  I bet
a MediaWiki extension would be quite popular with the dedicated-server
users, along with all the slicehost/linode folks who splurge and pay
$40/month for their hosting.

 The original poster suggested an optimisation pass post-compile, which
 obviously doesn't break anything because there's extensions that do it
 already. So I don't know who you're arguing against.

I'm not arguing against anything, simply explaining how we got here.  I
am all for optimizations that don't break everything.  In the case of a
post-compile optimization pass, nobody has been able to write one that
could speed up normal code without caching the optimized opcodes.  We do
have pecl/optimizer, which works in conjunction with APC.  It can easily
be made to work without APC, but there isn't much point, since the pass
always takes longer than the execution time it can save unless the code
being optimized is absolutely horrendous.

We have also played with some of your other ideas in the past, but I
suppose most of the core devs are somewhat spoiled by not needing to run
an entire Wikipedia clone on a $10 shared hosting account.  All I can
say on this is, send some patches to the list.  PHP improves through code.

-Rasmus
