php-general Digest 21 Dec 2009 02:43:37 -0000 Issue 6498

Topics (messages 300601 through 300620):

Re: PHP and SEO Workarounds
        300601 by: TG
        300602 by: Gautam Bhatia
        300603 by: Ashley Sheridan

Re: Best ajax library
        300604 by: Ali Asghar Toraby Parizy

OOP Design Question
        300605 by: Daniel Kolbo
        300607 by: Robert Cummings
        300609 by: Larry Garfield
        300616 by: Larry Garfield

efficiency of include()
        300606 by: Daniel Kolbo
        300610 by: Larry Garfield
        300613 by: Daniel Kolbo
        300615 by: Nathan Rixham

File and Directory Ownership Question
        300608 by: Al
        300611 by: Ashley Sheridan
        300612 by: Al

Re: Checking for internet connection.
        300614 by: Andy Shellam

SQL Queries
        300617 by: דניאל דנון
        300618 by: דניאל דנון
        300619 by: Jonathan Tapicer

Form validation issue
        300620 by: Ernie Kemp

Administrivia:

To subscribe to the digest, e-mail:
        php-general-digest-subscr...@lists.php.net

To unsubscribe from the digest, e-mail:
        php-general-digest-unsubscr...@lists.php.net

To post to the list, e-mail:
        php-gene...@lists.php.net


----------------------------------------------------------------------
--- Begin Message ---
Look into using your web server's mod_rewrite functionality to alter the 
URL that the search engines see.  For an example, look at Wordpress' 
permalinks or Joomla's SEF (search engine friendly) URLs.

What this will do is make all your dynamic pages look more like static 
ones.  If all your PHP pages had unique names like "about.php" and 
"ourmission.php", the file names would lend themselves to SEO already, 
but a site like that would be pretty boring, and probably wouldn't need 
PHP in the first place.

The big issue is pages that take parameters:

domain.com/catalog.php?catid=55&prodid=23

You could turn this into:

domain.com/catalog/55/23

But that's still not very semantic and doesn't give search engines much to 
work with.  So you can massage it even further with something like:

domain.com/catalog/23_Footware/55_Happy_Bunny_Slippers

Kind of ugly, but you can use mod_rewrite to turn that into:    
domain.com/catalog.php?catid=23_Footware&prodid=55_Happy_Bunny_Slippers  
 then use PHP to extract the 23 and 55 to display the proper data.

Depending on the variables you're passing, you could exclude the numbers 
(which do nothing for SEO) and use PHP to figure out what to display.

domain.com/catalog/Footware/Happy_Bunny_Slippers
or...
domain.com/Footware/Happy_Bunny_Slippers  (shorter URLs are favored for SEO 
as not having "too much information", and since some search engines only 
look at the first XX characters of a URL, a shorter URL lets you pack more 
relevant information into the part that counts).

With mod_rewrite, you can tell it to check to see if a file exists, and if 
it doesn't, process a more complicated rewrite.

So if the catalog is your main source of dynamic data, then you could leave 
the "catalog" part out and use the 
domain.com/Footware/Happy_Bunny_Slippers format.    If you had other 
dynamic data, you could leave the catalog in as a clue for mod_rewrite 
as to which PHP script should process the rest of the parameters.
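
As a rough sketch of the whole arrangement (the rewrite rule and the 
parameter handling below are just an illustration, not a drop-in config):

<?php
// Assumed .htaccess for the examples above (hypothetical):
//
//   RewriteEngine On
//   # Only rewrite when the request doesn't match a real file or directory
//   RewriteCond %{REQUEST_FILENAME} !-f
//   RewriteCond %{REQUEST_FILENAME} !-d
//   RewriteRule ^catalog/([^/]+)/([^/]+)$ /catalog.php?catid=$1&prodid=$2 [L,QSA]

// catalog.php: pull the numeric ids back out of "23_Footware" style values.
// An (int) cast stops at the first non-digit character.
$catid  = (int) $_GET['catid'];   // "23_Footware"             -> 23
$prodid = (int) $_GET['prodid'];  // "55_Happy_Bunny_Slippers" -> 55

// ...look up and display the category/product using $catid and $prodid...
?>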


Another thing you can do is use the canonical tag to indicate which pages 
are duplicates.   You may not think you have duplicates, but a common 
example would be a page that can also be reached sorted differently, or a 
"print friendly" version of a page without menus and such.  The content is 
essentially the same, which search engines will see and read as an attempt 
to stuff the ballot box and make your site look more relevant to that 
subject than it really is.   So if you have:

domain.com/catalog.php?catid=55&prodid=23
and...
domain.com/catalog.php?catid=55&prodid=23&sort=alpha
or..
domain.com/catalog.php?catid=55&prodid=23&print=1

Then you'd want to set a canonical tag at the top of each of these listings 
that says:
domain.com/catalog.php?catid=55&prodid=23

(meaning that all these pages are the same as 
"domain.com/catalog.php?catid=55&prodid=23")

You may be saying that pages sorted differently are different output.  
True... but it's all the same data in the end, even with pagination.
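
Emitting the tag from PHP is a one-liner in the page head (a minimal 
sketch; the URL construction here is just an example):

<?php
// Point every sort/print variation of the page at one canonical URL
// (example parameter names from above).
$canonical = 'http://domain.com/catalog.php?catid=' . (int) $_GET['catid']
           . '&prodid=' . (int) $_GET['prodid'];
?>
<link rel="canonical" href="<?php echo htmlspecialchars($canonical); ?>" />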



And the biggest thing you can do for SEO isn't really even PHP related: 
have good semantic markup.  Make sure it all validates and has a proper 
doctype assigned.  And make your content as organic and human as 
possible.  Don't write content for machines and search engines... don't 
try to "write to keywords" by making your content all awkward by stuffing 
in as many keywords as possible.   The search engines are pretty smart.  
Write good content and write it for humans; make it relevant to the 
topic and your audience, and the search engines will see that and rank it 
higher for that audience.


There are other things, but these are some of the big ones.

-TG




----- Original Message -----
From: Gautam Bhatia <mail2gautambha...@gmail.com>
To: php-gene...@lists.php.net
Date: Sun, 20 Dec 2009 12:15:45 -0500
Subject: [PHP] PHP and SEO Workarounds

> hey folks,
>                 This is in regards to SEO and PHP. From what I have
> read, most (not all) PHP content is dynamic, which makes it so
> powerful, but it also means that the chances of it being indexed by search
> engines are lower. Am I right in saying this? If so, how do I optimize
> my site for search engines when it is powered by PHP and the content is
> dynamic? Please guide me in this regard. Thank you.
> 
> Regards,
> Gautam Bhatia
> mail2gautambha...@gmail.com
> 
> 

--- End Message ---
--- Begin Message ---
hey guys,
                  Thanks a lot everyone for the links and suggestions.
I will take a look into mod_rewrite and search engine friendly URLs to
make the site more efficient.


Regards,
Gautam Bhatia.
mail2gautambha...@gmail.com




On Sun, 2009-12-20 at 12:44 +0000, Ashley Sheridan wrote:

> On Sun, 2009-12-20 at 12:15 -0500, Gautam Bhatia wrote: 
> 
> > [snip]
> 
> 
> This is a point you'll see mentioned by most SEO 'experts', none of
> whom I've seen provide any evidence for it. Just think about the
> last time you searched for the answer to a question online. I'd hazard
> a guess that the most popular results had dynamic URLs such as forum
> posts, etc.
> 
> While it is true that the URL of a page carries some weight in the
> page's ranking, it's minuscule compared to the actual content of the
> page. There might be an issue with blogs though, the ones where older
> posts get pushed onto other pages as new ones appear. So a story that
> appears on blog.php?page=1 might tomorrow appear on page=2, and then
> page=3 the day after. If you have a setup like this, then you could
> benefit from making more static-type links for the posts, like
> Wordpress does.
> 
> More important for your SEO is getting the content correctly marked
> up, i.e. headings in <h_> tags, alt text for images, etc. There's a
> good tool for checking these sorts of things at
> http://nibbler.silktide.com 
> 
> Thanks,
> Ash
> http://www.ashleysheridan.co.uk



--- End Message ---
--- Begin Message ---
On Sun, 2009-12-20 at 20:05 -0500, Gautam Bhatia wrote:

> hey guys,
>                   Thanks a lot everyone for the links and suggestions.
> I will take a look into mod_rewrite and search engine friendly URLs to
> make the site more efficient.
> 
> [snip]

Just to remind you, "search engine friendly" URLs really aren't any more
friendly to search engines; that's a myth. Do a search for a question and
look at the URLs of the results and you'll see for yourself.

Thanks,
Ash
http://www.ashleysheridan.co.uk



--- End Message ---
--- Begin Message ---
On Sat, Dec 19, 2009 at 7:13 PM, tedd <tedd.sperl...@gmail.com> wrote:

> I have chosen jquery at last. because:
>>
>> * jQuery has a huge number of plugins available for everything you could
>> imagine wanting to do online
>> * The information on the jQuery site is extremely well documented, with
>> many
>> examples
>> * jQuery does not extend the elements that it works on, which means that
>> JavaScript such as 'for(i in el){...}' will still work
>> * jQuery's CSS selector engine, Sizzle, is arguably the most complete and
>> the
>> quickest available
>>
>> refer to jQuery with PHP by Kae Verens
>> thanks for your help, friends
>>
>
>
> An additional consideration:
>
> There are at least two books published on jQuery, which shows how
> widespread and well documented it is.
>
> Cheers,
>
> tedd
> --
> -------
> http://sperling.com  http://ancientstones.com  http://earthstones.com
>
I agree with you.
"jQuery 1.3 with PHP <http://www.packtpub.com/jquery-1-3-with-php/book>" is
the best book I have seen so far. It is a good starting point for PHP
developers who want to learn jQuery.

--- End Message ---
--- Begin Message ---
Hello PHPers,

I have a collection of about 60 objects (class definitions).  They are
all very similar.  They all share a substantial percentage of the same
core, but they all have slight variations as well.  The approach I took
was to make an abstract core class that each of the 60 objects extends.
This works, but...

Here's my problem: not every php/http request requires all 60 objects.
At this point, I do not know in advance which objects will be required,
so I include the class defs of all 60 objects every time...  I don't like
this idea, as it seems a 'bloated' approach.

So now I'm thinking instead I'll just have one object which has the
union of all 60 objects' methods.  But I'm not too happy with this
either, because (i) now each instantiated object is carrying around a lot
of unnecessary baggage, (ii) I lose modularity of code, and (iii) the code
does not make as much 'intuitive' sense.  For (iii), think of the 'why
does this object have this method?' type questions another programmer
would ask (or me, a year from now).  The answer would be 'efficiency
concerns', and I'm aware that you generally don't want to compromise code
readability for efficiency if avoidable.

Maybe this would be the perfect opportunity for the php autoload
functions...?

Thanks for your help/thoughts/comments,
dK
`

--- End Message ---
--- Begin Message ---
Set up autoloading:

    http://php.net/manual/en/language.oop5.autoload.php

Cheers,
Rob.


Daniel Kolbo wrote:
> [snip]


--
http://www.interjinn.com
Application and Templating Framework for PHP

--- End Message ---
--- Begin Message ---
On Sunday 20 December 2009 10:35:56 am Daniel Kolbo wrote:

> [snip]

Yep, this is a textbook case for a proper autoload setup.  And no, cramming 
all of the functionality into one mega class won't get you any efficiency.  In 
fact, it would be just as wasteful as loading all 60 classes even when you're 
only going to use 2; you're still loading up roughly the same amount of code.  
Parsing it as one mega class or as one big parent with a few small child 
classes is about break-even as far as performance goes, but the mega class 
is much poorer architecture.

-- 
Larry Garfield
la...@garfieldtech.com

--- End Message ---
--- Begin Message ---
On Sunday 20 December 2009 1:08:46 pm you wrote:

> > [snip]
>
> Thanks for your insight.
>
> I could probably set up autoloading, but I wonder if I would do it
> 'properly'.  Do you have a link or reference that you'd recommend for
> how to do a 'proper autoload setup'?
>
> Thanks,
> dK

Well, there is no universal agreement on what a "proper" setup is. :-)  There 
is a group trying to establish a Java-like standard for all projects to use 
once they get to PHP 5.3 and namespaces, but there are still issues to work 
out and IMO it's not actually a good approach for many use cases.  I'd argue 
that "proper" depends in a large part on your specific use case.

The most important aspect of a good autoload mechanism, though, is that it's 
fast and extensible.  Use spl_autoload_register() instead of __autoload(), and 
make sure that you keep the runtime of your autoload callbacks to an absolute 
minimum.  (A DB hit per autoload, for instance, is a no-no.)

Beyond that, it varies with your project.
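
For what it's worth, a minimal sketch along those lines, assuming one
class per file in a classes/ directory (the layout is an assumption, not
a standard):

<?php
// Keep the callback cheap: one string concatenation and one stat call.
function my_autoload($class)
{
    $file = dirname(__FILE__) . '/classes/' . $class . '.php';
    if (is_file($file)) {
        require $file;
    }
}
spl_autoload_register('my_autoload');

// Only the classes a request actually uses get loaded:
$obj = new SomeProduct(); // hypothetical class; loads classes/SomeProduct.php
?>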

-- 
Larry Garfield
la...@garfieldtech.com

--- End Message ---
--- Begin Message ---
Hello PHPers,

This is a two-part question:

1) Is it faster to include one file with lots of code, or many separate
smaller individual files?  Assume the one massive file is merely the
concatenation of all the smaller individual files.  (I am assuming the
one massive file would be faster..., but I wanted to get confirmation.)

2) Suppose PHP has to invoke the include function 100 times.  Suppose
all files are on average the same size and contain the same number of
instructions.  Would it be faster to include the same exact file 100
times as opposed to 100 different file names?  Basically, does the
engine/parser take any shortcuts if it notices that the file name has
already been included once?

I would test this, but I don't want to create hundreds of different files...

Thanks,
dK
`

--- End Message ---
--- Begin Message ---
On Sunday 20 December 2009 10:45:45 am Daniel Kolbo wrote:
> Hello PHPers,
>
> This is a two part question:
>
> 1) Is it faster to include one file with lots of code, or many separate
> smaller individual files?  Assume the one massive file is merely the
> concatenation of all the smaller individual files.  (I am assuming the
> one massive file would be faster..., but i wanted to get confirmation).

Conventional wisdom is that the one big file is faster, since it requires one 
disk I/O hit instead of several.  HOWEVER, if you're only using a small 
portion of that code then it could be faster to load only the code you really 
need.  Where the trade-off lies varies with your architecture, the amount of 
code, and how good the disk caching of your OS is.

> 2) Suppose php has to invoke the include function 100 times.  Suppose
> all files are on average the same size and contain the same number of
> instructions.  Would it be faster to include the same exact file 100
> times as opposed to 100 different file names?  Basically, does the
> engine/parser take any shortcuts if it notices that the file name has
> already been included once?

I'm pretty sure that PHP will recognize that it's already parsed that file and 
keep the opcode caches in memory, so it needn't hit disk again.  I've not 
checked into that part of the engine, though, so I may be wrong there.
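
If you did want to measure it without hand-creating the files, a
throwaway script like this would do (a rough sketch, not a rigorous
benchmark):

<?php
// Generate 100 tiny include files in a temp directory, then time
// 100 distinct includes against including one file 100 times.
$dir = sys_get_temp_dir() . '/inc_test';
@mkdir($dir);
for ($i = 0; $i < 100; $i++) {
    file_put_contents("$dir/f$i.php", "<?php \$x$i = $i;\n");
}

$t = microtime(true);
for ($i = 0; $i < 100; $i++) {
    include "$dir/f$i.php";     // 100 different files
}
echo '100 distinct files:  ', microtime(true) - $t, "s\n";

$t = microtime(true);
for ($i = 0; $i < 100; $i++) {
    include "$dir/f0.php";      // the same file 100 times
}
echo 'same file 100 times: ', microtime(true) - $t, "s\n";
?>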

-- 
Larry Garfield
la...@garfieldtech.com

--- End Message ---
--- Begin Message ---
Larry Garfield wrote:
> [snip]

Thanks for the reply.

For 2): I've often searched for PHP parsing documentation.  I love the
php.net documentation.  However, I have yet to find an excellent source
documenting the PHP parser/engine.  My searches always yield the Zend
website, but it doesn't seem like I can get very far from that page.
Any suggestions on where I could learn more of the nitty-gritty details
of the PHP/Zend behaviours?

Thanks,
dK
`

--- End Message ---
--- Begin Message ---
Daniel Kolbo wrote:
> [snip]

Daniel,

I'm only replying because I've been down this route of taking everything
into consideration countless times now.

Code optimisation, SQL and DB optimisation, and database and web server
tuning all have a huge impact in comparison to the things you're
considering, and as such should probably be given more weight.

Further, the next obvious steps are to get Zend Optimizer (which
optimizes the opcodes), then a good opcode cache; and finally cache all
the output you can so that PHP doesn't even have to come into the
equation for most "hits".
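
A bare-bones version of that last idea (the cache path and lifetime are
just example values):

<?php
// Bare-bones whole-page output cache.
$cacheFile = '/tmp/cache_' . md5($_SERVER['REQUEST_URI']) . '.html';
$ttl = 300; // serve the cached copy for up to five minutes

if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
    readfile($cacheFile);   // cache hit: PHP does almost no work
    exit;
}

ob_start();                 // cache miss: buffer the page while it's built
// ...all the normal page generation goes here...
echo 'expensive page, built ' . date('r');
file_put_contents($cacheFile, ob_get_contents());
ob_end_flush();             // send the page and keep the copy on disk
?>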

Then, finally you get down to the bits you're considering for those
extra microseconds and the knowledge that you've done good; whether
it'll make any difference at this point or not is another issue :p

bit of light reading for you:
http://phplens.com/lens/php-book/optimizing-debugging-php.php

regards!

--- End Message ---
--- Begin Message ---
I've got a PHP script running on a shared host [Blue Host] that creates a
directory and writes files in it.

The directory and files are "owned" by the site name, not "nobody" as I've always seen on other shared hosts.

Anyone have a possible explanation for this?

Thanks, Al.........

--- End Message ---
--- Begin Message ---
On Sun, 2009-12-20 at 12:58 -0500, Al wrote:

> I've got a PHP script running on a shared host [Blue Host] that creates a 
> directory and writes files in it.
> 
> The directory and files are "owned" by the site name, not "nobody" as I've 
> always seen on other shared hosts.
> 
> Anyone have a possible explanation for this?
> 
> Thanks, Al.........
> 

The files, if created by PHP, will belong to the user and group that
your instance of Apache is running under. Some hosts are set up slightly
differently, but it's nothing to worry about.
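
You can see both users from PHP itself if you're curious (a small sketch;
it needs the POSIX extension, and the file name is just an example):

<?php
// Which user is this PHP process running as?
$proc = posix_getpwuid(posix_geteuid());
echo "PHP runs as: {$proc['name']}\n";

// Who owns a file the script just created?
file_put_contents('ownership_test.txt', 'hello');
$owner = posix_getpwuid(fileowner('ownership_test.txt'));
echo "file owned by: {$owner['name']}\n";
?>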

Thanks,
Ash
http://www.ashleysheridan.co.uk



--- End Message ---
--- Begin Message ---


On 12/20/2009 1:06 PM, Ashley Sheridan wrote:
> [snip]

Thanks for the quick answer.

On purpose, to add a little security, I set the permissions to 644 to limit access by anything other than PHP scripts.

I'd like to put my script above the web space; but it's generally used by folks on shared hosts, and some hosts won't provide access there.

I guess I'll add an htaccess file to my application's root that prevents access by anything other than my script.

I've also found several cases, on some hosts, where the directories can be indexed, so all the files are exposed. I've given config files, etc. a php extension to prevent them from easily being read.

Al........


--- End Message ---
--- Begin Message ---
> By attempting to connect you will implicitly query DNS (which itself
> is a connection to server).  

No it's not - it's putting out a packet targeted at an IP address and hoping a 
server will answer - hence why multicast works for DNS, because you're not 
directly connecting to a specified server like you do with TCP/IP.  I believe 
it's similar for ping, which is why it's used so commonly in monitoring 
applications.

> If you're not online you won't be able to
> resolve the domain name.  

Exactly - so if all the OP wanted to check for was a working Internet 
connection, then DNS is a better way to go IMHO.
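
A bare-bones version of that check in PHP might look like this (the
hostname is just an example):

<?php
// Treat a successful DNS lookup as evidence of a working connection.
function is_online($host = 'php.net')
{
    return checkdnsrr($host, 'A'); // true if an A record resolves
}
var_dump(is_online());
?>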

> Hence no overhead of actually connecting,
> because that won't even start to happen until the hostname is resolved
> to an IP.  If it happens to resolve from some cache, oh well.  Not
> like its that much overhead.  You're nitpicking over the number of
> packets it takes to SYN/ACK.

Yep, and if it's running inside a LAN with x number of computers all doing the 
same thing, that adds up to a lot of unnecessary traffic - I've seen it.


--- End Message ---
--- Begin Message ---
Hey, let's assume I got a table named "users".
It contains id & name.

I have another table called "notes", which contains id, user_id, contents.


I want to delete all users from table "users" for whom (SELECT
... FROM notes WHERE user_id=ID) returns an empty result, i.e. users
that don't have notes.


What is the fastest way to do it?

-- 
Use ROT26 for best security

--- End Message ---
--- Begin Message ---
Sorry for the double post; I forgot to add the way I thought about doing it:

Simple SQL query:

SELECT * FROM `users` as u  WHERE (SELECT COUNT(id) FROM notes WHERE user_id
= u.id LIMIT 0,1) = 0

Problem is I have about 450,000 "users" and about 90% don't have "notes",
and it takes LOADS of time even when I limit it:

SELECT * FROM `users` as u  WHERE (SELECT COUNT(id) FROM notes WHERE user_id
= u.id LIMIT 0,1) = 0 LIMIT 0,10

That takes about 10 seconds, which is too much time... Any way to optimize it?

On Sun, Dec 20, 2009 at 11:30 PM, דניאל דנון <danondan...@gmail.com> wrote:

> [snip]


-- 
Use ROT26 for best security

--- End Message ---
--- Begin Message ---
Hello,

Queries like that usually run faster using a LEFT JOIN, like this:

select u.id
from users u
     left join notes n on u.id = n.user_id
where n.id is null;

That query will give you the ids of the users without notes. Make sure
to have an index on notes.user_id to let the LEFT JOIN use it and run
faster.
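
The delete itself can reuse the same join. This is an untested sketch,
assuming MySQL (which the backtick and LIMIT syntax above suggest); back
up the table first:

-- Index to let the LEFT JOIN run fast (skip if it already exists)
CREATE INDEX idx_notes_user_id ON notes (user_id);

-- MySQL multi-table DELETE: remove users that have no notes
DELETE users
FROM users
     LEFT JOIN notes ON users.id = notes.user_id
WHERE notes.id IS NULL;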

Hope that helps, regards,

Jonathan

On Sun, Dec 20, 2009 at 6:41 PM, דניאל דנון <danondan...@gmail.com> wrote:

> [snip]

--- End Message ---
--- Begin Message ---
 

Good Day,

I need help in validating a form.

The form is validated by JavaScript first; then, if all the fields are
filled in, it is validated by PHP.

The form starts with:

<form name="myForm" action="<?php echo
$_SERVER['PHP_SELF'];?>" method="post" onsubmit='return formValidator()' >

 

The "formValidator()" goes to a javascript and does display the missing
information in this case BUT then the page gets reloaded and clears all the
javascript error messages and does the PHP validation.

 

The PHP only runs if the fields are set by testing using 'isset".

 

Without puting on numeric lines of go can you suggest things I must have
overlooked. Silly request but there must be something I'm overlooking.    I
have simular code on other programs but this one is casuing me trouble.
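
For reference, a minimal version of the setup as described (the field
name and messages are made up; the key detail is that formValidator()
must return false to stop the submit and keep its messages on screen):

<?php
// Server-side validation: only runs when the form was actually submitted.
$errors = array();
if (isset($_POST['username'])) {
    if (trim($_POST['username']) === '') {
        $errors[] = 'Username is required.';
    }
}
foreach ($errors as $e) {
    echo '<p>' . htmlspecialchars($e) . '</p>';
}
?>
<script type="text/javascript">
function formValidator() {
    var f = document.forms['myForm'];
    if (f.username.value == '') {
        alert('Please fill in the username.');
        return false; // false stops the submit, so the page is NOT reloaded
    }
    return true;      // true lets the POST go through to the PHP above
}
</script>
<form name="myForm" action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post"
      onsubmit="return formValidator()">
    <input type="text" name="username" />
    <input type="submit" value="Send" />
</form>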

 

Thanks ever so much.

 

....../Ernie

--- End Message ---
