php-general Digest 8 Jul 2010 14:43:39 -0000 Issue 6836
Topics (messages 306745 through 306761):
Re: Multiple Access Question
306745 by: Bastien Koert
306746 by: Paul M Foster
306747 by: Robert Cummings
306748 by: Paul M Foster
306749 by: Robert Cummings
306750 by: Tommy Pham
306755 by: Richard Quadling
306758 by: tedd
Re: "php -l" - does it find *anything*?
306751 by: Gary .
Simple XML - problem with errors
306752 by: Gary .
306756 by: Richard Quadling
306759 by: Gary .
306760 by: Marc Guay
306761 by: Gary .
Re: interface name file
306753 by: Ashley Sheridan
306757 by: shiplu
Setting up a XDebug debugging environment for PHP / WAMP / Eclipse PDT
306754 by: David Négrier
Administrivia:
To subscribe to the digest, e-mail:
[email protected]
To unsubscribe from the digest, e-mail:
[email protected]
To post to the list, e-mail:
[email protected]
----------------------------------------------------------------------
--- Begin Message ---
On Wed, Jul 7, 2010 at 8:47 PM, Paul M Foster <[email protected]> wrote:
> On Wed, Jul 07, 2010 at 12:59:30PM -0400, tedd wrote:
>
>> Hi gang:
>>
>> I have *my way* of handling this problem, but I would like to hear
>> how you guys do it.
>>
>> Here's the problem -- let's say you have a database containing names
>> and addresses and you want "approved" users to be able to access the
>> data. As such, a user must login before accessing an editing script
>> that would allow them to review and edit the data -- nothing
>> complicated about that.
>>
>> However, let's say you have more than one user accessing the editing
>> script at the same time and you want to make sure that any changes
>> made to the database are done in the most efficient manner possible.
>>
>> For example, if two users access the database at the same time and
>> are editing different records, then there's no real problem. When
>> each user finishes editing they simply click submit and their changes
>> are recorded in the database. However, if two (or more) users want to
>> access the same record, then how do you handle that?
>
> Use a DBMS? I'm sorry if that seems flippant, but a DBMS handles this by
> queuing the requests, which is one of the advantages of a client-server
> DBMS.
>
> So maybe I don't understand your question.
>
> Paul
>
> --
> Paul M. Foster
>
> --
> PHP General Mailing List (http://www.php.net/)
> To unsubscribe, visit: http://www.php.net/unsub.php
>
>
@Paul,
The OP's question is about concurrency on the record itself: how to
avoid two users accessing the same record and potentially damaging
each other's changes.
My approach is the same as Rob's: flag it as locked and let the second
user get a read-only copy.
--
Bastien
Cat, the other other white meat
--- End Message ---
--- Begin Message ---
On Wed, Jul 07, 2010 at 10:01:05PM -0400, Bastien Koert wrote:
> On Wed, Jul 7, 2010 at 8:47 PM, Paul M Foster <[email protected]>
> wrote:
> > On Wed, Jul 07, 2010 at 12:59:30PM -0400, tedd wrote:
> >
> >> Hi gang:
> >>
> >> I have *my way* of handling this problem, but I would like to hear
> >> how you guys do it.
> >>
> >> Here's the problem -- let's say you have a database containing names
> >> and addresses and you want "approved" users to be able to access the
> >> data. As such, a user must login before accessing an editing script
> >> that would allow them to review and edit the data -- nothing
> >> complicated about that.
> >>
> >> However, let's say you have more than one user accessing the editing
> >> script at the same time and you want to make sure that any changes
> >> made to the database are done in the most efficient manner possible.
> >>
> >> For example, if two users access the database at the same time and
> >> are editing different records, then there's no real problem. When
> >> each user finishes editing they simply click submit and their changes
> >> are recorded in the database. However, if two (or more) users want to
> >> access the same record, then how do you handle that?
> >
> > Use a DBMS? I'm sorry if that seems flippant, but a DBMS handles this by
> > queuing the requests, which is one of the advantages of a client-server
> > DBMS.
> >
> > So maybe I don't understand your question.
> >
> > Paul
> >
> > --
> > Paul M. Foster
> >
> > --
> > PHP General Mailing List (http://www.php.net/)
> > To unsubscribe, visit: http://www.php.net/unsub.php
> >
> >
>
> @Paul,
>
> The OPs question is about concurrency on the record itself. How to
> avoid two users accessing the same record and potentially damaging
> each others changes
>
> My approach is the same as Rob's. Flag it locked and let the second
> user gets a read only copy
I can't think of a way to do this using MySQL or PostgreSQL. And one of
the biggest issues with the solution you suggest is the user who opens a
record for writing and then goes out for coffee. Everyone's locked out
of the record (for writes) until they come back and finish.
Okay, to solve that, we start a timer. But when the locker's time is up,
how do we let the locker know they're not allowed to store whatever
edits they've made? And how do we fix it so that those locked out are
now unlocked? Plus, they're probably in a queue, so we really only let
one of them know that they can now make edits.
Since this is a PHP list, I assume we're talking about a web interface.
So how do we do all this back end jockeying? Javascript is about the
only way. But every time you fire off one of these javascript dealies,
it has to be on its own timer so that it can let the user know that the
original locker is gone and now the golden ticket is yours. It
essentially has to sleep and ping, sleep and ping. Actually, it's more
like a spinlock. But a spinlock would eat CPU for every user, if it was
running on the server. So it would have to be running on the client, and
"ping" the server every once in a while.
Then you'd have to figure out some kind of messaging infrastructure for
the DBMS, so that it would quickly answer "pings" without tying up a lot
of CPU cycles. It would have to be something outside the normal query
infrastructure.
When you actually get into this, it's an incredibly complex solution. I
vote instead for allowing edits to be queued and logging changes to the
database. If there is a true contention problem, you can look at the
journal and see who made what edits in what order and resolve the
situation.
The best analogy I can think of is when using a DVCS like git, and
trying to merge changes where two people have edited the same area of a
file. Ultimately, git throws up its hands and asks a human to resolve
the situation.
Bottom line: I've heard about concurrency problems since I started using
databases, and I've never heard of a foolproof solution for them that
wasn't incredibly complex. And I don't think I've ever seen a solution
in actual practice.
If I'm wrong, someone show me where it's been viably solved and how.
Paul
--
Paul M. Foster
--- End Message ---
--- Begin Message ---
Paul M Foster wrote:
On Wed, Jul 07, 2010 at 10:01:05PM -0400, Bastien Koert wrote:
On Wed, Jul 7, 2010 at 8:47 PM, Paul M Foster <[email protected]>
wrote:
On Wed, Jul 07, 2010 at 12:59:30PM -0400, tedd wrote:
Hi gang:
I have *my way* of handling this problem, but I would like to hear
how you guys do it.
Here's the problem -- let's say you have a database containing names
and addresses and you want "approved" users to be able to access the
data. As such, a user must login before accessing an editing script
that would allow them to review and edit the data -- nothing
complicated about that.
However, let's say you have more than one user accessing the editing
script at the same time and you want to make sure that any changes
made to the database are done in the most efficient manner possible.
For example, if two users access the database at the same time and
are editing different records, then there's no real problem. When
each user finishes editing they simply click submit and their changes
are recorded in the database. However, if two (or more) users want to
access the same record, then how do you handle that?
Use a DBMS? I'm sorry if that seems flippant, but a DBMS handles this by
queuing the requests, which is one of the advantages of a client-server
DBMS.
So maybe I don't understand your question.
Paul
--
Paul M. Foster
--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php
@Paul,
The OPs question is about concurrency on the record itself. How to
avoid two users accessing the same record and potentially damaging
each others changes
My approach is the same as Rob's. Flag it locked and let the second
user gets a read only copy
I can't think of a way to do this using MySQL or PostgreSQL. And one of
the biggest issues with the solution you suggest is the user who opens a
record for writing and then goes out for coffee. Everyone's locked out
of the record (for writes) until they come back and finish.
Okay, to solve that, we start a timer. But when the locker's time is up,
how do we let the locker know they're not allowed to store whatever
edits they've made? And how do we fix it so that those locked out are
now unlocked? Plus, they're probably in a queue, so we really only let
one of them know that they can now make edits.
Since this is a PHP list, I assume we're talking about a web interface.
So how do we do all this back end jockeying? Javascript is about the
only way. But every time you fire off one of these javascript dealies,
it has to be on its own timer so that it can let the user know that the
original locker is gone and now the golden ticket is yours. It
essentially has to sleep and ping, sleep and ping. Actually, it's more
like a spinlock. But a spinlock would eat CPU for every user, if it was
running on the server. So it would have to be running on the client, and
"ping" the server every once in a while.
Then you'd have to figure out some kind of messaging infrastrucure for
the DBMS, so that it would quickly answer "pings" without tying up a lot
of CPU cycles. It would have to be something outside the normal query
infrastructure.
When you actually get into this, it's an incredibly complex solution. I
vote instead for allowing edits to be queued, log changes to the
database. If there is a true contention problem, you can look at the
journal and see who made what edits in what order and resolve the
situation.
The best analogy I can think of is when using a DVCS like git, and
trying to merge changes where two people have edited the same area of a
file. Ultimately, git throws up its hands and asks a human to resolve
the situation.
Bottom line: I've heard about concurrency problems since I started using
databases, and I've never heard of a foolproof solution for them that
wasn't incredibly complex. And I don't think I've ever seen a solution
in actual practice.
If I'm wrong, someone show me where it's been viably solved and how.
I think you're overthinking the issue. The timer handles the issue of
holding onto a lock for too long. As for a write queue... don't bother.
If a user finds that another user has a lock then tell them when it
expires. They can come back and try for the lock on their own. You can
set up AJAX polling to see if the lock has been removed and indicate
this to the user (if they've bothered to wait on the page) but this is
optional. Queuing edits is not a good solution. Imagine document X:
UserA requests X
UserB requests X
UserC requests X
UserD requests X
UserA modifies X and saves X.1
UserB modifies X and saves X.2
UserC modifies X and saves X.3
UserD modifies X and saves X.4
In this scenario all the work done by UserA, UserB, and UserC is
clobbered by the submission from UserD. This can be resolved via merging,
as versioning systems do, but that makes less sense in a high-traffic
collaborative content system such as a wiki. In the lock
scenario we have the following:
UserA requests X
UserA modifies X and saves X.1
UserB requests X.1
UserB modifies X.1 and saves X.2
UserC requests X.2
UserC modifies X.2 and saves X.3
UserD requests X.3
UserD modifies X.3 and saves X.4
At each write step the previous work is appropriately integrated. This
is the desired functionality for a collaborative document such as a
Wiki. In the case of source code, one generally expects a much smaller
number of editors, or that editors are working on very different areas of
the source file, so conflict resolution is less common thanks to
automatic merging by the version control system.
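Purely for illustration, a minimal sketch of that lock-plus-expiry idea
in PHP/PDO, assuming a hypothetical record_locks table (record_id,
locked_by, locked_until) with one row per record:
function acquireLock(PDO $pdo, $recordId, $userId, $ttlSeconds = 300)
{
    // Take the lock only if nobody holds it, the previous lock has
    // expired, or we already hold it (which also refreshes the expiry).
    // Assumes the PHP and database clocks roughly agree.
    $expires = date('Y-m-d H:i:s', time() + $ttlSeconds);
    $sql = "UPDATE record_locks
               SET locked_by = :user, locked_until = :expires
             WHERE record_id = :id
               AND (locked_by IS NULL
                    OR locked_until < NOW()
                    OR locked_by = :holder)";
    $stmt = $pdo->prepare($sql);
    $stmt->execute(array(
        ':user'    => $userId,
        ':expires' => $expires,
        ':id'      => $recordId,
        ':holder'  => $userId,
    ));

    // rowCount() is 1 only if we got (or refreshed) the lock.
    return $stmt->rowCount() == 1;
}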
Cheers,
Rob.
--- End Message ---
--- Begin Message ---
On Wed, Jul 07, 2010 at 11:28:56PM -0400, Robert Cummings wrote:
> Paul M Foster wrote:
<snip>
>>> @Paul,
>>>
>>> The OPs question is about concurrency on the record itself. How to
>>> avoid two users accessing the same record and potentially damaging
>>> each others changes
>>>
>>> My approach is the same as Rob's. Flag it locked and let the second
>>> user gets a read only copy
>>
>> I can't think of a way to do this using MySQL or PostgreSQL. And one of
>> the biggest issues with the solution you suggest is the user who opens a
>> record for writing and then goes out for coffee. Everyone's locked out
>> of the record (for writes) until they come back and finish.
>>
>> Okay, to solve that, we start a timer. But when the locker's time is up,
>> how do we let the locker know they're not allowed to store whatever
>> edits they've made? And how do we fix it so that those locked out are
>> now unlocked? Plus, they're probably in a queue, so we really only let
>> one of them know that they can now make edits.
>>
>> Since this is a PHP list, I assume we're talking about a web interface.
>> So how do we do all this back end jockeying? Javascript is about the
>> only way. But every time you fire off one of these javascript dealies,
>> it has to be on its own timer so that it can let the user know that the
>> original locker is gone and now the golden ticket is yours. It
>> essentially has to sleep and ping, sleep and ping. Actually, it's more
>> like a spinlock. But a spinlock would eat CPU for every user, if it was
>> running on the server. So it would have to be running on the client, and
>> "ping" the server every once in a while.
>>
>> Then you'd have to figure out some kind of messaging infrastrucure for
>> the DBMS, so that it would quickly answer "pings" without tying up a lot
>> of CPU cycles. It would have to be something outside the normal query
>> infrastructure.
>>
>> When you actually get into this, it's an incredibly complex solution. I
>> vote instead for allowing edits to be queued, log changes to the
>> database. If there is a true contention problem, you can look at the
>> journal and see who made what edits in what order and resolve the
>> situation.
>>
>> The best analogy I can think of is when using a DVCS like git, and
>> trying to merge changes where two people have edited the same area of a
>> file. Ultimately, git throws up its hands and asks a human to resolve
>> the situation.
>>
>> Bottom line: I've heard about concurrency problems since I started using
>> databases, and I've never heard of a foolproof solution for them that
>> wasn't incredibly complex. And I don't think I've ever seen a solution
>> in actual practice.
>>
>> If I'm wrong, someone show me where it's been viably solved and how.
>
> I think you're overthinking the issue. The timer handles the issue of
> holding onto a lock for too long.
That's why I suggested it.
> As for a write queue... don't bother.
> If a user finds that another user has a lock then tell them when it
> expires. They can come back and try for the lock on their own. You can
> set up AJAX polling to see if the lock has been removed and indicate
> this to the user (if they've bothered to wait on the page) but this is
> optional.
That's why I suggested it.
Yes, we could just tell users "come back later" if they wanted to edit a
locked page. I was just imagining a 100% complete wipe-your-butt-for-you
solution.
> Queuing edits is not a good solution.
And yet, it appears to be adequate for the DBMSes I'm familiar with.
> Imagine document X:
>
> UserA requests X
> UserB requests X
> UserC requests X
> UserD requests X
>
> UserA modifies X and saves X.1
> UserB modifies X and saves X.2
> UserC modifies X and saves X.3
> UserD modifies X and saves X.4
>
> In this scenario all the work done by UserA, UserB, and UserC is
> clobbered by the submission by UserD. This can be resolved via merging
> such as used by versioning systems,
... if automatic merging can be done in a particular case. But there's a
non-zero probability that a merge will require human intervention. Yes
of course, without version/merging or some type of write-locks, there is
potential contention.
> but this makes less sense in a high
> traffic collaborative content system such as a wiki. In the lock
> scenario we have the following:
>
> UserA requests X
> UserA modifies X and saves X.1
>
> UserB requests X.1
> UserB modifies X.1 and saves X.2
>
> UserC requests X.2
> UserC modifies X.2 and saves X.3
>
> UserD requests X.3
> UserD modifies X.3 and saves X.4
... assuming UserB waits until UserA stores his edits, UserC waits until
UserB stores his edits, etc. The above assumes locking, and probably
versioning and merging.
But a wiki is not a DBMS. And perhaps the OP was talking about a wiki.
In which case, all this may be moot. I just checked, and Wikipedia does
not lock pages under edit. They do versioning, but their "locking" is on
the honor system. For a discourteous user, this would allow contention.
I don't know if other wikis perform locking. I doubt it, but I could be
wrong. (Note: Wikipedia *will* lock a page from *all* edits when there
is continued controversy about a given article.)
>
> At each write step the previous work is appropriately integrated. This
> is the desired functionality for a collaborative document such as a
> Wiki. In the case of source code, once generally expects a much smaller
> number of editors or that editors are working on very different areas of
> the source file and so conflict resolution is less common due to
> automatic merge by the version control system.
Agreed.
Again, though, I'd like to see a *working* example of the above,
particularly in the context of a DBMS.
Back in my FoxPro days, we used semaphores on each record, but it was a
very complicated system to program with. Add-on libraries (like
CodeBase) which accessed xBase files used OS-based file locking.
Again, clumsy and error-prone. (This was one of the perpetual problems
for a database system which was originally built as a single-user
system. SQLite has similar problems. It write-locks, but such locks
aren't reliable under either Windows or NFS environments, according to
the documentation.)
There's another subtle point about DBMSes. Doing a SELECT over a
table(s) doesn't indicate to the DBMS that a write will occur later on
that same data. In fact, writes may occur on different fields in the
same record "concurrently" without issue. Contention is really only a
problem when two users try to edit the same *field* at the same time.
And as far as I know, DBMSes like PostgreSQL and MySQL simply queue
writes, allowing the kind of contention you're talking about. PostgreSQL
replaces the *whole* record with updated data upon any writes to it.
(Actually, they mark the old record for deletion and *add* an updated
record.) I don't know about MySQL.
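(As an aside, within a single transaction you can tell the DBMS that a
write is coming, via SELECT ... FOR UPDATE, but that lock only lasts
until COMMIT, so it can't span two web requests. A quick sketch, with
made-up table and column names:
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->beginTransaction();

// The row is write-locked from here until commit(); other transactions
// trying to lock the same row will wait. Requires InnoDB on MySQL.
$stmt = $pdo->prepare("SELECT * FROM contacts WHERE id = ? FOR UPDATE");
$stmt->execute(array($recordId));
$row = $stmt->fetch(PDO::FETCH_ASSOC);

// ... change $row in memory ...

$upd = $pdo->prepare("UPDATE contacts SET address = ? WHERE id = ?");
$upd->execute(array($newAddress, $recordId));

$pdo->commit();   // lock released here
)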
But maybe I'm overthinking it. ;-}
Paul
--
Paul M. Foster
--- End Message ---
--- Begin Message ---
Paul M Foster wrote:
On Wed, Jul 07, 2010 at 11:28:56PM -0400, Robert Cummings wrote:
Paul M Foster wrote:
<snip>
@Paul,
The OPs question is about concurrency on the record itself. How to
avoid two users accessing the same record and potentially damaging
each others changes
My approach is the same as Rob's. Flag it locked and let the second
user gets a read only copy
I can't think of a way to do this using MySQL or PostgreSQL. And one of
the biggest issues with the solution you suggest is the user who opens a
record for writing and then goes out for coffee. Everyone's locked out
of the record (for writes) until they come back and finish.
Okay, to solve that, we start a timer. But when the locker's time is up,
how do we let the locker know they're not allowed to store whatever
edits they've made? And how do we fix it so that those locked out are
now unlocked? Plus, they're probably in a queue, so we really only let
one of them know that they can now make edits.
Since this is a PHP list, I assume we're talking about a web interface.
So how do we do all this back end jockeying? Javascript is about the
only way. But every time you fire off one of these javascript dealies,
it has to be on its own timer so that it can let the user know that the
original locker is gone and now the golden ticket is yours. It
essentially has to sleep and ping, sleep and ping. Actually, it's more
like a spinlock. But a spinlock would eat CPU for every user, if it was
running on the server. So it would have to be running on the client, and
"ping" the server every once in a while.
Then you'd have to figure out some kind of messaging infrastrucure for
the DBMS, so that it would quickly answer "pings" without tying up a lot
of CPU cycles. It would have to be something outside the normal query
infrastructure.
When you actually get into this, it's an incredibly complex solution. I
vote instead for allowing edits to be queued, log changes to the
database. If there is a true contention problem, you can look at the
journal and see who made what edits in what order and resolve the
situation.
The best analogy I can think of is when using a DVCS like git, and
trying to merge changes where two people have edited the same area of a
file. Ultimately, git throws up its hands and asks a human to resolve
the situation.
Bottom line: I've heard about concurrency problems since I started using
databases, and I've never heard of a foolproof solution for them that
wasn't incredibly complex. And I don't think I've ever seen a solution
in actual practice.
If I'm wrong, someone show me where it's been viably solved and how.
I think you're overthinking the issue. The timer handles the issue of
holding onto a lock for too long.
That's why I suggested it.
As for a write queue... don't bother.
If a user finds that another user has a lock then tell them when it
expires. They can come back and try for the lock on their own. You can
set up AJAX polling to see if the lock has been removed and indicate
this to the user (if they've bothered to wait on the page) but this is
optional.
That's why I suggested it.
Yes, we could just tell users "come back later" if they wanted to edit a
locked page. I was just imagining a 100% complete wipe-your-butt-for-you
solution.
Queuing edits is not a good solution.
And yet, it appears to adequate for the DBMSes I'm familiar with.
Imagine document X:
UserA requests X
UserB requests X
UserC requests X
UserD requests X
UserA modifies X and saves X.1
UserB modifies X and saves X.2
UserC modifies X and saves X.3
UserD modifies X and saves X.4
In this scenario all the work done by UserA, UserB, and UserC is
clobbered by the submission by UserD. This can be resolved via merging
such as used by versioning systems,
... if automatic merging can be done in a particular case. But there's a
non-zero probability that a merge will require human intervention. Yes
of course, without version/merging or some type of write-locks, there is
potential contention.
but this makes less sense in a high
traffic collaborative content system such as a wiki. In the lock
scenario we have the following:
UserA requests X
UserA modifies X and saves X.1
UserB requests X.1
UserB modifies X.1 and saves X.2
UserC requests X.2
UserC modifies X.2 and saves X.3
UserD requests X.3
UserD modifies X.3 and saves X.4
... assuming UserB waits until UserA stores his edits, UserC waits until
UserB stores his edits, etc. The above assumes locking, and probably
versioning and merging.
No, not at all. Each user can only edit the version last saved. It only
assumes locking.
But a wiki is not a DBMS. And perhaps the OP was talking about a wiki.
In which case, all this may be moot. I just checked, and Wikipedia does
not lock pages under edit. They do versioning, but their "locking" is on
the honor system. For a discourteous user, this would allow contention.
I don't know if other wikis perform locking. I doubt it, but I could be
wrong. (Note: Wikipedia *will* lock a page from *all* edits when there
is continued controversy about a given article.)
I knew I should have double checked Wikipedia before I wrote that :)
Indeed, Wikipedia does use an automatic merge.
At each write step the previous work is appropriately integrated. This
is the desired functionality for a collaborative document such as a
Wiki. In the case of source code, once generally expects a much smaller
number of editors or that editors are working on very different areas of
the source file and so conflict resolution is less common due to
automatic merge by the version control system.
Agreed.
Again, though, I'd like to see a *working* example of the above,
particularly in the context of a DBMS.
I'm not sure what context the OP had intended, for whatever reason I
assumed content editing. I agree, this wouldn't work well at all in a
DBMS since there would need to be multiple locks in place on all data
points affected by a modification.
Back in my FoxPro days, we used semaphores on each record, but it was a
very complicated system to program with. For add-on libraries (like
CodeBase) which accessed xBase files, they used OS-based file locking.
Again, clumsy and error-prone. (This was one of the perpetual problems
for a database system which was originally built as a single-user
system. SQLite has similar problems. It write-locks, but such locks
aren't reliable under either Windows or NFS environments, according to
the documentation.)
Directory creation is atomic on any modern OS I can think of and I
believe the atomicity is preserved over NFS. As such, you can use a
directory to facilitate locks over NFS.
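For instance (just a sketch, paths invented):
// mkdir() either creates the directory or fails atomically, so it can
// serve as a crude lock even on an NFS share.
$lockDir = '/mnt/shared/locks/record-' . (int) $recordId;

if (@mkdir($lockDir, 0700)) {
    // We own the lock; do the work...
    rmdir($lockDir);          // ...then release it.
} else {
    // Someone else holds the lock.
}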
There's another subtle point about DBMSes. Doing a SELECT over a
table(s) doesn't indicate to the DBMS that a write will occur later on
that same data. In fact, writes may occur on different fields in the
same record "concurrently" without issue. Contention is really only a
problem when two users try to edit the same *field* at the same time.
And as far as I know, DBMSes like PostgreSQL and MySQL simply queue
writes, allowing the kind of contention you're talking about. PostgreSQL
replaces the *whole* record with updated data upon any writes to it.
(Actually, they mark the old record for deletion and *add* an updated
record.) I don't know about MySQL.
But maybe I'm overthinking it. ;-}
Well, when considering DBMSes you are right. The updates are queued (if
the client isn't awaiting a return code) and they are applied one after
the other, clobbering as necessary :) One would assume that before the
data goes to the DBMS it has been appropriately handled with respect to
contextual contention resolution.
Cheers,
Rob.
--- End Message ---
--- Begin Message ---
-----Original Message-----
> From: Robert Cummings [mailto:[email protected]]
> Sent: Wednesday, July 07, 2010 10:28 PM
> To: Paul M Foster
> Cc: [email protected]
> Subject: Re: [PHP] Multiple Access Question
>
> Paul M Foster wrote:
> > On Wed, Jul 07, 2010 at 11:28:56PM -0400, Robert Cummings wrote:
> >
> >> Paul M Foster wrote:
> >
> > <snip>
> >
> >>>> @Paul,
> >>>>
> >>>> The OPs question is about concurrency on the record itself. How to
> >>>> avoid two users accessing the same record and potentially damaging
> >>>> each others changes
> >>>>
> >>>> My approach is the same as Rob's. Flag it locked and let the second
> >>>> user gets a read only copy
> >>> I can't think of a way to do this using MySQL or PostgreSQL. And one
> >>> of the biggest issues with the solution you suggest is the user who
> >>> opens a record for writing and then goes out for coffee. Everyone's
> >>> locked out of the record (for writes) until they come back and finish.
> >>>
> >>> Okay, to solve that, we start a timer. But when the locker's time is
> >>> up, how do we let the locker know they're not allowed to store
> >>> whatever edits they've made? And how do we fix it so that those
> >>> locked out are now unlocked? Plus, they're probably in a queue, so
> >>> we really only let one of them know that they can now make edits.
> >>>
> >>> Since this is a PHP list, I assume we're talking about a web interface.
> >>> So how do we do all this back end jockeying? Javascript is about the
> >>> only way. But every time you fire off one of these javascript
> >>> dealies, it has to be on its own timer so that it can let the user
> >>> know that the original locker is gone and now the golden ticket is
> >>> yours. It essentially has to sleep and ping, sleep and ping.
> >>> Actually, it's more like a spinlock. But a spinlock would eat CPU
> >>> for every user, if it was running on the server. So it would have to
> >>> be running on the client, and "ping" the server every once in a while.
> >>>
> >>> Then you'd have to figure out some kind of messaging infrastrucure
> >>> for the DBMS, so that it would quickly answer "pings" without tying
> >>> up a lot of CPU cycles. It would have to be something outside the
> >>> normal query infrastructure.
> >>>
> >>> When you actually get into this, it's an incredibly complex
> >>> solution. I vote instead for allowing edits to be queued, log
> >>> changes to the database. If there is a true contention problem, you
> >>> can look at the journal and see who made what edits in what order
> >>> and resolve the situation.
> >>>
> >>> The best analogy I can think of is when using a DVCS like git, and
> >>> trying to merge changes where two people have edited the same area
> >>> of a file. Ultimately, git throws up its hands and asks a human to
> >>> resolve the situation.
> >>>
> >>> Bottom line: I've heard about concurrency problems since I started
> >>> using databases, and I've never heard of a foolproof solution for
> >>> them that wasn't incredibly complex. And I don't think I've ever
> >>> seen a solution in actual practice.
> >>>
> >>> If I'm wrong, someone show me where it's been viably solved and how.
> >> I think you're overthinking the issue. The timer handles the issue of
> >> holding onto a lock for too long.
> >
> > That's why I suggested it.
> >
> >> As for a write queue... don't bother.
> >> If a user finds that another user has a lock then tell them when it
> >> expires. They can come back and try for the lock on their own. You
> >> can set up AJAX polling to see if the lock has been removed and
> >> indicate this to the user (if they've bothered to wait on the page)
> >> but this is optional.
> >
> > That's why I suggested it.
> >
> > Yes, we could just tell users "come back later" if they wanted to edit
> > a locked page. I was just imagining a 100% complete
> > wipe-your-butt-for-you solution.
> >
> >> Queuing edits is not a good solution.
> >
> > And yet, it appears to adequate for the DBMSes I'm familiar with.
> >
> >> Imagine document X:
> >>
> >> UserA requests X
> >> UserB requests X
> >> UserC requests X
> >> UserD requests X
> >>
> >> UserA modifies X and saves X.1
> >> UserB modifies X and saves X.2
> >> UserC modifies X and saves X.3
> >> UserD modifies X and saves X.4
> >>
> >> In this scenario all the work done by UserA, UserB, and UserC is
> >> clobbered by the submission by UserD. This can be resolved via
> >> merging such as used by versioning systems,
> >
> > ... if automatic merging can be done in a particular case. But there's
> > a non-zero probability that a merge will require human intervention.
> > Yes of course, without version/merging or some type of write-locks,
> > there is potential contention.
> >
> >> but this makes less sense in a high
> >> traffic collaborative content system such as a wiki. In the lock
> >> scenario we have the following:
> >>
> >> UserA requests X
> >> UserA modifies X and saves X.1
> >>
> >> UserB requests X.1
> >> UserB modifies X.1 and saves X.2
> >>
> >> UserC requests X.2
> >> UserC modifies X.2 and saves X.3
> >>
> >> UserD requests X.3
> >> UserD modifies X.3 and saves X.4
> >
> > ... assuming UserB waits until UserA stores his edits, UserC waits
> > until UserB stores his edits, etc. The above assumes locking, and
> > probably versioning and merging.
>
> No, not at all. Each user can only edit the version last saved. It only
> assumes
> locking.
>
> > But a wiki is not a DBMS. And perhaps the OP was talking about a wiki.
> > In which case, all this may be moot. I just checked, and Wikipedia
> > does not lock pages under edit. They do versioning, but their
> > "locking" is on the honor system. For a discourteous user, this would allow
> contention.
> > I don't know if other wikis perform locking. I doubt it, but I could
> > be wrong. (Note: Wikipedia *will* lock a page from *all* edits when
> > there is continued controversy about a given article.)
>
> I knew I should have double checked Wikipedia before I wrote that :) Indeed,
> Wikipedia does use an automatic merge.
>
> >> At each write step the previous work is appropriately integrated.
> >> This is the desired functionality for a collaborative document such
> >> as a Wiki. In the case of source code, once generally expects a much
> >> smaller number of editors or that editors are working on very
> >> different areas of the source file and so conflict resolution is less
> >> common due to automatic merge by the version control system.
> >
> > Agreed.
> >
> > Again, though, I'd like to see a *working* example of the above,
> > particularly in the context of a DBMS.
>
> I'm not sure what context the OP had intended, for whatever reason I
> assumed content editing. I agree, this wouldn't work well at all in a DBMS
> since there would need to be multiple locks in place on all data points
> affected by a modification.
>
> > Back in my FoxPro days, we used semaphores on each record, but it was
> > a very complicated system to program with. For add-on libraries (like
> > CodeBase) which accessed xBase files, they used OS-based file locking.
> > Again, clumsy and error-prone. (This was one of the perpetual problems
> > for a database system which was originally built as a single-user
> > system. SQLite has similar problems. It write-locks, but such locks
> > aren't reliable under either Windows or NFS environments, according to
> > the documentation.)
>
> Directory creation is atomic on any modern OS I can think of and I believe the
> atomicity is preserved over NFS. As such, you can use a directory to
> facilitate
> locks over NFS.
>
> > There's another subtle point about DBMSes. Doing a SELECT over a
> > table(s) doesn't indicate to the DBMS that a write will occur later on
> > that same data. In fact, writes may occur on different fields in the
> > same record "concurrently" without issue. Contention is really only a
> > problem when two users try to edit the same *field* at the same time.
> > And as far as I know, DBMSes like PostgreSQL and MySQL simply queue
> > writes, allowing the kind of contention you're talking about.
> > PostgreSQL replaces the *whole* record with updated data upon any
> writes to it.
> > (Actually, they mark the old record for deletion and *add* an updated
> > record.) I don't know about MySQL.
> >
> > But maybe I'm overthinking it. ;-}
>
> Well when considering DBMSes you are right. The updates are queued (if the
> client isn't awaiting a return code) and they are applied one after the other
> clobbering as necessary :) One would assume before the data goes to the
> DBMS that it had been appropriately handled with respect to contextual
> contention resolution.
>
> Cheers,
> Rob.
>
> --
Any decent modern RDBMS should be able to do row locking, i.e. only one
update to that row is possible at a time. But given the OP's question, I'd
say this would be his scenario: John and Jane coincidentally look at the
same record, John gets distracted for a second while Jane updates it, and
when John goes to update it he may overwrite the record Jane has already
updated. I think the OP wants to know whether there's a way for John to
know that the data has changed since he last saw it. You can implement
something similar to ASP.NET's 'optimistic concurrency update', meaning
you'll have to cache the original data values somewhere. Compare the
'keys' (not just the PK, but some other important values/indexes too): if
the values are exactly the same, continue with the update. If the data
has changed, prompt John that the data has changed since he last viewed
it, show the changed data (and possibly who made the change), and give
him the choice of overwriting it with his update. IMHO, this would be
ideal for many situations, including where the users are in different
remote locations. :)
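A rough sketch of that optimistic check in PHP/PDO (table and column
names are just for illustration; assume the last_modified value was
cached when the edit form was built):
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// Only write if the row still carries the last_modified value John saw.
$sql = "UPDATE contacts
           SET name = :name, address = :address, last_modified = NOW()
         WHERE id = :id AND last_modified = :seen";
$stmt = $pdo->prepare($sql);
$stmt->execute(array(
    ':name'    => $_POST['name'],
    ':address' => $_POST['address'],
    ':id'      => $_POST['id'],
    ':seen'    => $_POST['last_modified_when_loaded'],
));

if ($stmt->rowCount() == 0) {
    // Jane (or someone) changed or deleted the record in the meantime:
    // re-read it, show John the differences, and let him decide whether
    // to overwrite.
}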
Regards,
Tommy
--- End Message ---
--- Begin Message ---
On 7 July 2010 17:59, tedd <[email protected]> wrote:
> Hi gang:
>
> I have *my way* of handling this problem, but I would like to hear how you
> guys do it.
>
> Here's the problem -- let's say you have a database containing names and
> addresses and you want "approved" users to be able to access the data. As
> such, a user must login before accessing an editing script that would allow
> them to review and edit the data -- nothing complicated about that.
>
> However, let's say you have more than one user accessing the editing script
> at the same time and you want to make sure that any changes made to the
> database are done in the most efficient manner possible.
>
> For example, if two users access the database at the same time and are
> editing different records, then there's no real problem. When each user
> finishes editing they simply click submit and their changes are recorded in
> the database. However, if two (or more) users want to access the same
> record, then how do you handle that?
>
> Cheers,
>
> tedd
I've developed many multi-user applications (specialising in
Accounting and EPOS systems).
Primarily for DOS and Windows and using SQL Server and D-ISAM data
storage systems.
In all instances, multi-user locking is handled through the use of a
semaphore at the row level. We did look at semaphore locking at the
field level.
For DOS and Windows, the necessity for handling timeouts was reduced
by the fact that as long as the user was logged in to the app, the lock
stayed - much to the annoyance of all other users. A simple process
to allow a user's locks to be dropped was required to handle the
instances where the user powered off, crashed, or forgot their password
over lunch. In real terms, the number of collisions was very low. Few
users would actually be entering data into the same record at the same
time. In the vast majority, all the users are in the same room or, at
least, the same building/office.
When I started the web versions of these apps, the semaphore locking
was still required, but was extended to include a timeout based upon
the session timeout.
We did realize that by using web based technology, some of our clients
were looking to allow their users to work from home. So informing the
user that a row was locked, how long it was locked for, and who else
wanted it was well worthwhile.
So we introduced a new structure of lock requests. This was a simple
table containing the id of the current semaphore (all our semaphores
are in a single table), the user id of the person wanting the row and
when they requested the lock.
A user who is placed in the lock requests loop could reject their
request (i.e. come back later sort of thing).
The lock requests would be displayed to the lock holder so that they
could essentially get the message to hurry up - in the office someone
would call out "Is anyone in so-and-so?" and then someone would reply
"Oh yes! Sorry, just coming out now." sort of thing. This process
worked fine. The clients knew that only 1 edit could take place at a
time.
With a distributed workforce, the on-screen visualisation worked
well: a small drop-down in the corner of the app. Nice and easy.
As the server is more or less constantly being told of the presence of
a user (say once every 30 seconds), the server can easily detect a
dropped user and therefore undo the lock.
The user next in the lock requests table would automatically get the
lock and be informed via the ping that they had control of the row
(disabled inputs would become enabled, etc.). Their entry in the lock
requests table would be deleted and anyone else would see that they
have moved up the queue.
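To make the shape of that concrete (purely an illustrative sketch; the
table and column names here are invented):
// semaphores(id, row_id, user_id, locked_at) holds the current locks;
// lock_requests(semaphore_id, user_id, requested_at) is the queue.
function promoteNextRequester(PDO $pdo, $semaphoreId)
{
    $stmt = $pdo->prepare(
        "SELECT user_id FROM lock_requests
          WHERE semaphore_id = ?
          ORDER BY requested_at
          LIMIT 1"
    );
    $stmt->execute(array($semaphoreId));
    $next = $stmt->fetchColumn();

    if ($next === false) {
        return null;                       // nobody is waiting for this row
    }

    // Hand the lock to the oldest requester and drop them from the queue.
    $pdo->prepare("UPDATE semaphores SET user_id = ?, locked_at = NOW() WHERE id = ?")
        ->execute(array($next, $semaphoreId));
    $pdo->prepare("DELETE FROM lock_requests WHERE semaphore_id = ? AND user_id = ?")
        ->execute(array($next, $semaphoreId));

    return $next;                          // this user now holds the lock
}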
I hope this is of some use to you. Semaphores worked really well for
our apps. The transition from DOS/Windows to the web wasn't as easy as
it could have been - primarily due to the statelessness of the
requests. We did manage to do it without the need for any additional
monitors running on the server. All based upon simply tracking the
datetime of the lock requests, the table lock timeout and the session
timeout.
In all instances, if a lock has to be cleared due to a timeout, all
the unwritten data (we used AJAX to send edits to the server for
holding) would be held, but the lock removed and the user forced to
login again.
On the old system, a lock could persist and cause everyone to wait. On
the web, a lock could NOT persist, and the user who created the lock
would suffer the punishment of having to log in again, retrieve their
edits (which may now be stale) and try again.
Whatever technique you use, I would recommend getting it as close to
the database as possible. The use of stored procedures, for us, was
essential. Lock evaluation and enforcement was all done within the SQL
server - all apps have to use the stored procedures and this makes the
apps a lot simpler. And allows any app to use the same locking
techniques irrespective of the language the developer wanted to use -
our apps allowed for third party additions to the database, but no
access to our code.
I hope this makes some sense to you. It certainly is an interesting topic.
Regards,
Richard.
--- End Message ---
--- Begin Message ---
At 11:48 AM +0100 7/8/10, Richard Quadling wrote:
On 7 July 2010 17:59, tedd <[email protected]> wrote:
Hi gang:
I have *my way* of handling this problem, but I would like to hear how you
> guys do it.
-snip-
I hope this is of some sense to you. It certainly is an interesting topic.
Richard.
Richard:
Yes, it was very informative and useful -- it gave me an idea to try out.
Thanks very much for your most extensive answer.
Cheers,
tedd
--
-------
http://sperling.com http://ancientstones.com http://earthstones.com
--- End Message ---
--- Begin Message ---
On 7/6/10, Per Jessen wrote:
> It really is _only_ the syntax. Same goes for e.g. the C lint -
Sorry, but that's not what I remember from my C days, nor what Wikipedia
says: "lint was the name originally given to a particular program that
flagged some suspicious and non-portable constructs (likely to be bugs)
in C language source code." In fact, what would be the point of a C lint
that does that? It's already done by the parsing/syntactical analysis
part of the compiler, there'd be no point writing a separate program
(lint) to do that.
Anyway, yeah, I accept that that (syntax checking only) is what php -l
does, even if I think it's wrong, or at least incorrectly described :)
> --
> Per Jessen, Zürich (24.2°C)
It feels warmer than that today. Maybe it's because my code isn't
working X(
--- End Message ---
--- Begin Message ---
Why am I still getting an exception when I do this:
libxml_use_internal_errors(true);
$this->xml = new SimpleXMLElement($this->htmlString);
or this
$this->xml = new SimpleXMLElement($this->htmlString,
LIBXML_NOERROR|LIBXML_NOWARNING);
?
The exception says "Exception: String could not be parsed as XML". Not
a hint of why not, of course.
I thought the point of those things was to just stuff the content in,
and let user code handle errors? I mean, I *know* the provided HTML is
broken. I also know there's not a chance in hell of it ever being
fixed (completely out of my control).
And yes, I'd rather use DOM, but I can't.
--- End Message ---
--- Begin Message ---
On 8 July 2010 08:07, Gary . <[email protected]> wrote:
> Why am I still getting an exception when I do this:
>
> libxml_use_internal_errors(true);
> $this->xml = new SimpleXMLElement($this->htmlString);
>
> or this
> $this->xml = new SimpleXMLElement($this->htmlString,
> LIBXML_NOERROR|LIBXML_NOWARNING);
>
> ?
>
> The exception says "Exception: String could not be parsed as XML". Not
> a hint of why not, of course.
>
> I thought the point of those things was to just stuff the content in,
> and let user code handle errors? I mean, I *know* the provided HTML is
> broken. I also know there's not a chance in hell of it ever being
> fixed (completely out of my control).
>
> And yes, I'd rather use DOM, but I can't.
>
> --
> PHP General Mailing List (http://www.php.net/)
> To unsubscribe, visit: http://www.php.net/unsub.php
>
>
The XML needs to be "well formed" [1]. So, if it is junk, you can't
read it using SimpleXML as the XML is not well formed.
Try putting it through Tidy first - that is, tidy the file first.
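For example, something along these lines (untested sketch; assumes the
tidy extension is loaded and $htmlString holds the broken markup):
// Repair the broken HTML with Tidy, then hand the result to SimpleXML.
$config = array('output-xhtml' => true, 'numeric-entities' => true);
$clean  = tidy_repair_string($htmlString, $config, 'utf8');

libxml_use_internal_errors(true);
$xml = simplexml_load_string($clean);

if ($xml === false) {
    foreach (libxml_get_errors() as $error) {
        echo trim($error->message), "\n";
    }
}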
Regards,
Richard.
[1] http://www.devx.com/projectcool/Article/19944/0/page/3
--- End Message ---
--- Begin Message ---
Richard Quadling writes:
> On 8 July 2010 08:07, Gary wrote:
>> Why am I still getting an exception when I do this:
>>
>> libxml_use_internal_errors(true);
>> $this->xml = new SimpleXMLElement($this->htmlString);
>>
>> or this
>> $this->xml = new SimpleXMLElement($this->htmlString,
>> LIBXML_NOERROR|LIBXML_NOWARNING);
>>
>> ?
>>
>> The exception says "Exception: String could not be parsed as XML".
...
> The XML needs to be "well formed" [1].
I thought so, thanks. What does libxml_use_internal_errors do then, if
it doesn't allow me to handle those problems in my own code?
> So, if it is junk, you can't
> read it using SimpleXML as the XML is not well formed.
I'm trying to just use xml_parse and so on now.
This problem really should be *so* easy. In fact I've already solved it once X(
> Try putting it through Tidy first - that is, tidy the file first.
Ha ha!
Sorry.
It's almost certainly not available. I don't want to talk about it
*cries*
--- End Message ---
--- Begin Message ---
> libxml_use_internal_errors(true);
> $this->xml = new SimpleXMLElement($this->htmlString);
Hi Gary,
I have code that looks like this:
libxml_use_internal_errors(true);
$xml = simplexml_load_string($val);
$errors = libxml_get_errors();
if ($errors) {
    // do this (handle/log the parse errors)
} else {
    // do that (work with $xml)
}
which works fine. Not sure if that's helpful to you, but it seems
like it might.
Marc
--- End Message ---
--- Begin Message ---
Marc Guay writes:
>> libxml_use_internal_errors(true);
>> $this->xml = new SimpleXMLElement($this->htmlString);
> I have code that looks like this:
>
> libxml_use_internal_errors(true);
> $xml = simplexml_load_string($val);
Yeah. I tried simplexml_load_string and found that "worked" (in that it
didn't cause an exception - there are errors which caused the conversion
not to work). I wonder what the difference is between doing "new
SimpleXMLElement" and calling simplexml_load_string that makes the
libxml_use_internal_errors call ineffective. Odd.
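For comparison, a small sketch of guarding both entry points
($htmlString standing in for the broken markup):
libxml_use_internal_errors(true);

try {
    $xml = new SimpleXMLElement($htmlString); // throws on a fatal parse failure
} catch (Exception $e) {
    $xml = false;
}

// simplexml_load_string() signals the same failure by returning false instead:
// $xml = simplexml_load_string($htmlString);

if ($xml === false) {
    foreach (libxml_get_errors() as $error) {
        printf("line %d: %s\n", $error->line, $error->message);
    }
    libxml_clear_errors();
}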
--- End Message ---
--- Begin Message ---
On Thu, 2010-07-08 at 04:00 +0200, Hans Åhlin wrote:
> *.api
> Application Programming Interface
> http://en.wikipedia.org/wiki/Application_Programming_Interface
>
> **********************************************
> Hans Åhlin
> Tel: +46761488019
> icq: 275232967
> http://www.kronan-net.com/
> irc://irc.freenode.net:6667 - TheCoin
> **********************************************
>
>
>
> 2010/7/8 Augusto Flavio <[email protected]>:
> > Hi,
> >
> >
> > I want to know which file name is appropriate for a interface. Today, for a
> > PHP class I use: NAME.class.php. What about a interface? Is there a
> > definition about it ?
> >
> >
> >
> >
> >
> > Thanks
> >
> >
> >
> > Augusto Morais
> >
>
In that case, *.api.php. You shouldn't ever give PHP files a non-PHP
extension, as all it takes is a little misconfiguration and your
PHP code is open for the world to view. It's easier to keep a consistent
filename convention in your app than to guarantee nobody makes a mistake
setting up the server.
Thanks,
Ash
http://www.ashleysheridan.co.uk
--- End Message ---
--- Begin Message ---
I use a naming convention for interfaces.
If an object can be cached, I create an interface I+Cache+able = ICachable.
So a sample class would be named ASampleClass.php,
and the interface would be ICachable.php.
This is a well-known interface naming convention.
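A tiny example of that convention (names invented purely for illustration):
// ICachable.php
interface ICachable
{
    public function getCacheKey();
}

// ASampleClass.php
class ASampleClass implements ICachable
{
    private $id;

    public function __construct($id)
    {
        $this->id = $id;
    }

    public function getCacheKey()
    {
        return 'sample-class-' . $this->id;
    }
}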
Shiplu Mokadd.im
My talks, http://talk.cmyweb.net
Follow me, http://twitter.com/shiplu
SUST Programmers, http://groups.google.com/group/p2psust
Innovation distinguishes bet ... ... (ask Steve Jobs the rest)
--- End Message ---
--- Begin Message ---
Hi list,
After struggling a bit to set up a debug environment in PHP, I decided
to write a complete tutorial to explain how to set up XDebug in a WAMP /
Eclipse PDT environment.
Everything was already available on the web but was scattered across
several different sites.
I've tried to write a simple and concise article, and I hope it will
benefit the community. It is so comfortable to use a breakpoint in PHP
instead of debugging with var_dump!
Here is the link to the article:
http://blog.thecodingmachine.com/content/setting-xdebug-debugging-environment-php-wamp-eclipse-pdt
Enjoy,
David.
--- End Message ---