Re: [GENERAL] Really unique session ID - PID + connection timestamp?

2016-04-11 Thread Durumdara
Dear All!

2016-04-10 18:03 GMT+02:00 Adrian Klaver :

> On 04/10/2016 06:29 AM, durumd...@gmail.com wrote:
>
>>
>> Products and offers are edited over a long period.
>>
>
> Define long time, a session, a day, days, etc?
>

From 1 minute to 1-2 hours.


>
>> But we must save sub-data along the way. This is not a Word-like document
>> that can be saved at once, in a single transaction.
>> While a product is being edited, we must protect it from other users' edits.
>> But its sub-data must still be posted/committed to the DB, for example
>> shipping, article quantity changes, vouchers, etc.
>>
>
> So folks can make changes to the attributes of a Product, Offer, etc while
> it is being changed in ways they can not see?
>

They are not closely linked. Other people can also add new transport
areas, and they must be able to read those.
And no, Product and Offer elements are not editable by more than one person at once.


>
> Or do they get a read only view that changes as the 'locking' user makes
> edits?
>

Yes, and they can add new lines.


>
>
>>
> To be clear you are trying to come up with a solution that allows your
> application to run against different databases(Firebird, SQL Server,
> Postgres, etc?), using a single code base, correct?


Yes, absolutely right.

We have 3 choices:
1.) A second connection with a locking transaction (we lock other resources).
2.) Advisory locks.
3.) A multi-DB solution - if we can identify the current sessions with 99.99%
certainty (excluding summer/winter time or other clock problems and perfectly
identical timestamps) - see the sketch below.
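
A rough sketch of how that identification could work in PG, using only what
pg_stat_activity already exposes (pid, backend_start, client_addr, client_port).
The "session_fingerprint" name is made up for illustration, and extract(epoch ...)
is used so the value does not move with summer/winter time changes:

select pid,
       backend_start,
       pid || '/' || extract(epoch from backend_start)
           || '/' || coalesce(host(client_addr), 'local')
           || ':' || coalesce(client_port, -1) as session_fingerprint
from pg_stat_activity
where pid = pg_backend_pid();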

Thanks, we can close this thread... :-)

Regards


Re: [GENERAL] Really unique session ID - PID + connection timestamp?

2016-04-10 Thread Adrian Klaver

On 04/10/2016 06:29 AM, durumd...@gmail.com wrote:


Dear Alban!


On 2016.04.10. 13:05, Alban Hertroys wrote:

On 10 Apr 2016, at 9:07, Durumdara  wrote:

Dear Adrian!

Again: as far as I can see, the leading blocks of the code are removed by the
mailing system.

We have an "ourlocks" table which hold records (TableName, RecordID,
SessionInnerID, TimeStamp, etc, with TableName/RecordID prikey).

If anybody wants to lock a record "for a long time", "across
transactions", it tries to insert a new record here.

Why are those records being locked? Reading on, it seems like you're
trying to solve a fairly standard concurrency problem. Any RDBMS worth
their salt can handle that for you, you don't need to manually do any
of that.


These are not real locks. They are logical locks.
Products and offers are edited over a long period.


Define long time, a session, a day, days, etc?


But we must save sub-data along the way. This is not a Word-like document
that can be saved at once, in a single transaction.
While a product is being edited, we must protect it from other users' edits.
But its sub-data must still be posted/committed to the DB, for example
shipping, article quantity changes, vouchers, etc.


So folks can make changes to the attributes of a Product, Offer, etc 
while it is being changed in ways they can not see?


Or do they get a read only view that changes as the 'locking' user makes 
edits?








This sounds much more like a use-case for sub-transactions and select
for update (which puts a temporary RDBMS-controlled lock on the
relevant records) than for manual locking.


Yes, this is like a sub-transaction.
But only for the sub-data. The main data must be editable only by the first
user who started the edit.
It is a long-lived "lock"-like thing. That is what we simulate here.

Thanks for your suggestions. I will check this in our client library.


To be clear you are trying to come up with a solution that allows your 
application to run against different databases(Firebird, SQL Server, 
Postgres, etc?), using a single code base, correct?




dd





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Really unique session ID - PID + connection timestamp?

2016-04-10 Thread David G. Johnston
On Sun, Apr 10, 2016 at 6:29 AM, durumd...@gmail.com 
wrote:

>
> Dear Alban!
>
>
> On 2016.04.10. 13:05, Alban Hertroys wrote:
>
>> On 10 Apr 2016, at 9:07, Durumdara  wrote:
>>>
>>> Dear Adrian!
>>>
>>> Again: as far as I can see, the leading blocks of the code are removed by
>>> the mailing system.
>>>
>>> We have an "ourlocks" table which holds records (TableName, RecordID,
>>> SessionInnerID, TimeStamp, etc., with TableName/RecordID as the primary key).
>>>
>>> If anybody wants to lock a record "for a long time", "across transactions",
>>> it tries to insert a new record here.
>>>
>> Why are those records being locked? Reading on, it seems like you're
>> trying to solve a fairly standard concurrency problem. Any RDBMS worth
>> their salt can handle that for you, you don't need to manually do any of
>> that.
>>
>
> These are not real locks. They are logical locks.
> Products and offers are edited over a long period.
> But we must save sub-data along the way. This is not a Word-like document
> that can be saved at once, in a single transaction.
> While a product is being edited, we must protect it from other users' edits.
> But its sub-data must still be posted/committed to the DB, for example shipping,
> article quantity changes, vouchers, etc.
>
>
>
>
>> This sounds much more like a use-case for sub-transactions and select for
>> update (which puts a temporary RDBMS-controlled lock on the relevant
>> records) than for manual locking.
>>
>
> Yes, this is like a sub-transaction.
> But only for the sub-data. The main data must be editable only by the first user
> who started the edit.
> It is a long-lived "lock"-like thing. That is what we simulate here.
>
> Thanks for your suggestions. I will check this in our client library.
>

I get what you are trying to do, and was trying to figure out a way to
make "FOR UPDATE" and "SKIP LOCKED" or "NOWAIT" work in concert to solve
the problem.  I'm not familiar enough with the problem to have come up with
a viable solution.  But I kept coming back to the theory that maybe making
the database solve the "long running transaction" problem isn't the best
idea.  I'd tend to gravitate toward having a dedicated "librarian"
application that is responsible for accepting checkout (select) requests and
processing returns (updates) over the relevant data.

PostgreSQL doesn't make doing pessimistic concurrency via long-running
transactions that fun to implement - namely because it is not very scalable
and can wreak havoc on the system if done improperly, as many of the
maintenance routines can be prevented from doing their work.  That said, it's
likely there is some way to accomplish your goal.

In fact, I just remembered that we implemented "advisory locks" for just
that reason.

http://www.postgresql.org/docs/current/static/explicit-locking.html
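
As a rough sketch of the advisory-lock route for the (table, record) pairs
described up-thread: the two-argument form of pg_try_advisory_lock() takes two
int4 keys, so one can carry an application-chosen table code and the other the
record id (the table code below is made up for illustration):

-- try to take a session-level lock on "record 117 of Offer" without blocking
select pg_try_advisory_lock(1, 117);   -- 1 = application code for "Offer"

-- ... edit the offer, posting/committing sub-data in as many transactions
--     as needed; a session-level advisory lock survives across them ...

-- release it explicitly, or let it go away when the session ends
select pg_advisory_unlock(1, 117);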

David J.


Re: [GENERAL] Really unique session ID - PID + connection timestamp?

2016-04-10 Thread durumd...@gmail.com


Dear Alban!


On 2016.04.10. 13:05, Alban Hertroys wrote:

On 10 Apr 2016, at 9:07, Durumdara  wrote:

Dear Adrian!

Again: as far as I can see, the leading blocks of the code are removed by the mailing system.

We have an "ourlocks" table which hold records (TableName, RecordID, 
SessionInnerID, TimeStamp, etc, with TableName/RecordID prikey).

If anybody wants to lock a record "for a long time", "across transactions", it tries
to insert a new record here.

Why are those records being locked? Reading on, it seems like you're trying to 
solve a fairly standard concurrency problem. Any RDBMS worth their salt can 
handle that for you, you don't need to manually do any of that.


These are not real locks. They are logical locks.
Products and offers are edited over a long period.
But we must save sub-data along the way. This is not a Word-like document
that can be saved at once, in a single transaction.

While a product is being edited, we must protect it from other users' edits.
But its sub-data must still be posted/committed to the DB, for example
shipping, article quantity changes, vouchers, etc.






This sounds much more like a use-case for sub-transactions and select for 
update (which puts a temporary RDBMS-controlled lock on the relevant records) 
than for manual locking.


Yes, this is like a sub-transaction.
But only for the sub-data. The main data must be editable only by the first
user who started the edit.

It is a long-lived "lock"-like thing. That is what we simulate here.

Thanks for your suggestions. I will check this in our client library.

dd




Re: [GENERAL] Really unique session ID - PID + connection timestamp?

2016-04-10 Thread Alban Hertroys

> On 10 Apr 2016, at 9:07, Durumdara  wrote:
> 
> Dear Adrian!
> 
> Again: as far as I can see, the leading blocks of the code are removed by the
> mailing system.
> 
> We have an "ourlocks" table which holds records (TableName, RecordID,
> SessionInnerID, TimeStamp, etc., with TableName/RecordID as the primary key).
> 
> If anybody wants to lock a record "for a long time", "across transactions", it
> tries to insert a new record here.

Why are those records being locked? Reading on, it seems like you're trying to 
solve a fairly standard concurrency problem. Any RDBMS worth their salt can 
handle that for you, you don't need to manually do any of that.

> If another process wants to lock the same record, it can see this record (or gets
> a violation error), so it stops.
> 
> This is not for protecting all tables, only for protecting the main entities that
> have many subtables, like "Products", "Offers", etc.
> We can't use transactions, because in the editor users must post/commit
> sub-data.
> And PG is different from other DBs: if ANY statement fails, it rolls back the
> whole thing automatically (see my prior mail).
> In Firebird we can do simple record-lock protection with a second transaction on
> the main connection, but not in MS SQL or PG.

This sounds much more like a use-case for sub-transactions and select for 
update (which puts a temporary RDBMS-controlled lock on the relevant records) 
than for manual locking.
See: http://www.postgresql.org/docs/9.5/static/sql-begin.html and 
http://www.postgresql.org/docs/9.5/static/sql-select.html

You might also want to look into transaction isolation levels: 
http://www.postgresql.org/docs/9.5/interactive/mvcc.html

As an example of how a concurrent workflow with the above goes:

Session 1:
begin;
savepoint offer_update;
select product_id from offers where offer_id = 1234567 for update nowait;

Session 2:
begin;
savepoint offer_update;
select product_id from offers where offer_id = 1234567 for update nowait;
#   ERROR: could not obtain lock on the row (it is locked by session 1)
rollback to offer_update;

Session 1:
update offers set discount = 0.15 where offer_id = 1234567;
#   success
commit;

Session 2: (retrying the earlier select and update)
savepoint offer_update;
select product_id from offers where offer_id = 1234567 for update nowait;
update offers set discount = 0.10 where offer_id = 1234567;
#   success
commit;

(Without NOWAIT the second session's SELECT ... FOR UPDATE would simply block
until session 1 commits, instead of raising an error that can be caught with
the savepoint.)

You'll need to add some logic to your application (that editor you were talking 
about) so that it inserts savepoints and handles failures of sub-transactions 
appropriately.

> So we used real records in a real table. But how do we clean up if a client
> disconnected without deleting its own records?
> For this we created our own sessioninfo table with inner id, user id, timestamp,
> and [connectionid, connectiontime].
> The locking mechanism checks for the same lock (Offer, 117); if anybody holds the
> record, it checks whether that session is still connected.
> If the connection (ID + TS) is active, the lock is valid, and we can show
> information like "who is editing, please ask them to release it, etc.".
> If not, we can remove the record and insert our own.

It sounds to me like you're complicating your code where you could be 
simplifying it. Possibly, because you're used to a database that provides 
certain features to make up for the lack of others that are harder to 
implement. Both MS Access and Firebird are very much file-based desktop 
databases that are not really meant for concurrent access. The big RDBMSes (PG, 
MS SQL server, Oracle, DB2) are _designed_ for such workloads.

> The main goal is to protect the main entities. It works in MS SQL.
> My question was about how to get my client's connection timestamp, like
> pg_backend_pid().
> But as you wrote, I can get it from the pg_stat_activity view. Because no other
> PID can be the same as mine, I can select my own row from the view.
> 
> You said it has dangers (like GUID repetition). Yes, it has. And it is not the BAD
> SYSADMIN but the summer/winter time changes that are dangerous (the backward
> shift). In PG we can extend our "guid" with IP and port too, and this could be
> safe enough for us.

In that case you should at least use UTC timestamps. Still, with such an 
implementation it will be hard to create a reliable system.

> Thanks
> 
> 
> 
> 
> 2016-04-09 16:05 GMT+02:00 Adrian Klaver :
> On 04/09/2016 01:30 AM, Durumdara wrote:
> Dear Everybody!
> 
> 
> In MS SQL we had a "persistent lock" structure and method.
> It works across transactions because it is based on real records in a real
> table with a primary key (tablename + id).
> 
> For garbaging we kept special session info.
> In MS SQL the session id is a smallint, so it can repeat after server
> restarts, but my colleague found a "session creation timestamp".
> Together these form a unique key.
> With this we can check for dead sessions and we can clean their records.
> 
> It might help to explain more what it is you are trying to achieve.

Re: [GENERAL] Really unique session ID - PID + connection timestamp?

2016-04-10 Thread Durumdara
Dear Adrian!

Again: as far as I can see, the leading blocks of the code are removed by the
mailing system.

We have an "ourlocks" table which hold records (TableName, RecordID,
SessionInnerID, TimeStamp, etc, with TableName/RecordID prikey).

If anybody wants to lock a record "for a long time", "across transactions", it
tries to insert a new record here.
If another process wants to lock the same record, it can see this record (or gets
a violation error), so it stops.

This is not for protecting all tables, only for protecting the main entities that
have many subtables, like "Products", "Offers", etc.
We can't use transactions, because in the editor users must post/commit
sub-data.
And PG is different from other DBs: if ANY statement fails, it rolls back the
whole thing automatically (see my prior mail).
In Firebird we can do simple record-lock protection with a second transaction on
the main connection, but not in MS SQL or PG.

So we used real records in a real table. But how do we clean up if a client
disconnected without deleting its own records?
For this we created our own sessioninfo table with inner id, user id,
timestamp, and [connectionid, connectiontime].
The locking mechanism checks for the same lock (Offer, 117); if anybody holds
the record, it checks whether that session is still connected.
If the connection (ID + TS) is active, the lock is valid, and we can show
information like "who is editing, please ask them to release it, etc.".
If not, we can remove the record and insert our own.
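
Purely as an illustration, the two tables could look something like the sketch
below (the column names are only guesses based on this description, not our
real schema):

create table sessioninfo (
    inner_id        bigserial    primary key,
    user_id         integer      not null,
    created_at      timestamptz  not null default now(),
    connection_pid  integer      not null,  -- pg_backend_pid() of the client
    connection_ts   timestamptz  not null   -- backend_start from pg_stat_activity
);

create table ourlocks (
    table_name        text        not null,
    record_id         bigint      not null,
    session_inner_id  bigint      not null references sessioninfo (inner_id),
    locked_at         timestamptz not null default now(),
    primary key (table_name, record_id)
);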

The main goal is to protect the main entities. It works in MS SQL.
My question was about how to get my client's connection timestamp, like
pg_backend_pid().
But as you wrote, I can get it from the pg_stat_activity view. Because no other
PID can be the same as mine, I can select my own row from the view.

You said it has dangers (like GUID repetition). Yes, it has. And it is not the
BAD SYSADMIN but the summer/winter time changes that are dangerous (the
backward shift). In PG we can extend our "guid" with IP and port too, and this
could be safe enough for us.
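
With the tables sketched above, the "is the lock holder still connected?" check
could be a single query against pg_stat_activity (again only a sketch, using the
made-up column names from the previous sketch):

-- the (Offer, 117) lock is stale when no live backend matches both the PID
-- and the connection start time recorded when the lock was taken
select l.table_name, l.record_id
from ourlocks l
join sessioninfo s on s.inner_id = l.session_inner_id
where l.table_name = 'Offer' and l.record_id = 117
  and not exists (select 1
                  from pg_stat_activity a
                  where a.pid = s.connection_pid
                    and a.backend_start = s.connection_ts);
-- a returned row means the old holder is gone: delete it and insert our own lock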

Thanks




2016-04-09 16:05 GMT+02:00 Adrian Klaver :

> On 04/09/2016 01:30 AM, Durumdara wrote:
>
>> Dear Everybody!
>>
>>
>> In MS SQL we had a "persistent lock" structure and method.
>> It works across transactions because it is based on real records in a real
>> table with a primary key (tablename + id).
>>
>> For garbaging we kept special session info.
>> In MS SQL the session id is a smallint, so it can repeat after server
>> restarts, but my colleague found a "session creation timestamp".
>> Together these form a unique key.
>> With this we can check for dead sessions and we can clean their records.
>>
>
> It might help to explain more what it is you are trying to achieve.
>
> First I am not sure what you mean by 'persistent lock', especially as it
> applies to Postgres?
>
> Second, I assume by garbaging you mean garbage collection of something?
> If that is the case what exactly are you garbage collecting?
> I see 'clean records', what records would these be?
> In particular, on Postgres, where are you going to do this?
>
>
>> We want to create the same mechanism.
>>
>
> If the above questions did not already cover this, what mechanism?
>
>
>> I know there are advisory locks in PG, but I want to use a session id.
>>
>> This could be:
>> pg_backend_pid()
>>
>> But the PID may repeat.
>> Where can I get a timestamp or some other unique data with which I can create
>> a combined primary key?
>>
>> Thanks for your help!
>>
>> dd
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: [GENERAL] Really unique session ID - PID + connection timestamp?

2016-04-09 Thread Adrian Klaver

On 04/09/2016 01:30 AM, Durumdara wrote:

Dear Everybody!


In MS SQL we had a "persistent lock" structure and method.
It works across transactions because it is based on real records in a real table
with a primary key (tablename + id).

For garbaging we kept special session info.
In MS SQL the session id is a smallint, so it can repeat after server
restarts, but my colleague found a "session creation timestamp".
Together these form a unique key.
With this we can check for dead sessions and we can clean their records.


It might help to explain more what it is you are trying to achieve.

First I am not sure what you mean by 'persistent lock', especially as it 
applies to Postgres?


Second, I assume by garbaging you mean garbage collection of something?
If that is the case what exactly are you garbage collecting?
I see 'clean records', what records would these be?
In particular, on Postgres, where are you going to do this?



We want to create the same mechanism.


If the above questions did not already cover this, what mechanism?


I know there are advisory locks in PG, but I want to use a session id.

This could be:
pg_backend_pid()

But the PID may repeat.
Where can I get a timestamp or some other unique data with which I can create a
combined primary key?

Thanks for your help!

dd




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Really unique session ID - PID + connection timestamp?

2016-04-09 Thread Christoph Berg
Re: Stephen Frost 2016-04-09 <20160409115712.gt10...@tamriel.snowman.net>
> > Btw, what you are describing is exactly what %c in log_line_prefix
> > does.
> 
> That's not currently exposed at an SQL level anywhere though, is it?
> Perhaps we should add a way to get that.  Another thought would be to
> essentially expose 'log_line_prefix()' at the SQL level but I'm not sure
> that generally makes sense.  We could also have an explicit function to
> get MyStartTime, similar to pg_backend_pid().

I don't see how log_line_prefix() would make sense, but
pg_session_identifier() (= %c) would make sense to have, in the same
way that we have pg_backend_pid(), so the current session could be
located in the logs. (Not sure if pg_session_identifier shouldn't also
be a pg_stat_activity column, but that might be overkill. Maybe
SELECT pg_session_identifier(pid) from pg_stat_activity; ?)

MyStartTime is probably too specific.

Christoph




Re: [GENERAL] Really unique session ID - PID + connection timestamp?

2016-04-09 Thread Stephen Frost
* Christoph Berg (c...@df7cb.de) wrote:
> Re: Durumdara 2016-04-09 
> 
> > In MS SQL the session id is a smallint, so it can repeat after server restarts,
> > but my colleague found a "session creation timestamp".
> > Together these form a unique key.
> > With this we can check for dead sessions and we can clean their records.
> > 
> > We want to create the same mechanism.
> > I know there are advisory locks in PG, but I want to use a session id.
> > 
> > This could be:
> > pg_backend_pid()
> > 
> > But the PID may repeat.
> > Where can I get a timestamp or some other unique data with which I can create
> > a combined primary key?
> 
> Btw, what you are describing is exactly what %c in log_line_prefix
> does.

That's not currently exposed at an SQL level anywhere though, is it?
Perhaps we should add a way to get that.  Another thought would be to
essentially expose 'log_line_prefix()' at the SQL level but I'm not sure
that generally makes sense.  We could also have an explicit function to
get MyStartTime, similar to pg_backend_pid().

Thanks!

Stephen




Re: [GENERAL] Really unique session ID - PID + connection timestamp?

2016-04-09 Thread Durumdara
Dear John!

This is not about MS SQL.

This is about how to get a unique session_id. This could be "pid" + "client
starting time".
The PID isn't unique on its own; where can I get the "client's starting time"?

But I think the solution is the pg_stat_activity view.

No other session has the same PID as mine, because nobody else has my backend process.

So I can filter down to my own row, which contains my client's starting time.
Anybody's lock record whose PID only existed in the past can be eliminated.
Other PIDs are invalid if they are not present in the view.

if Lock_Is_On(aTable, aIDValue, HisPID, HisClientStartTS):

    p = GetMyPID()
    bCanUnlock = False

    if p == HisPID:
        # This is my PID - but is it really mine?
        ts = SelectClientStartTSFromActivityLog(p)
        if ts == HisClientStartTS:
            # It's mine
            pass
        else:
            # The PID is the same, but the client's starting time is different -
            # we can unlock it, this lock is not mine
            bCanUnlock = True
    else:
        # This is not my PID, so we check whether that session still exists
        if not HaveClientPIDAndStartTSinActivityLog(HisPID, HisClientStartTS):
            bCanUnlock = True  # nobody has it, the holder went away

    if bCanUnlock:
        RemoveTableLock(aTable, aIDValue)
    else:
        raise Exception('Cannot lock the object because it is locked by another user!')



Thanks for your inspiration!


2016-04-09 12:30 GMT+02:00 John R Pierce :

> On 4/9/2016 1:30 AM, Durumdara wrote:
>
>> In MS we had...
>>
>
> If you want Microsoft's unique version of SQL, run Microsoft SQL. That
> stuff you describe is a whole bunch of implementation-specific weirdness
> from the standpoint of someone outside, looking in.
>
>
>
>
> --
> john r pierce, recycling bits in santa cruz
>
>
>
>


Re: [GENERAL] Really unique session ID - PID + connection timestamp?

2016-04-09 Thread Stephen Frost
Greetings,

* Durumdara (durumd...@gmail.com) wrote:
> Where can I get a timestamp or some other unique data with which I can create a
> combined primary key?

Not entirely sure about the rest of it, but if you want backend start
time, you can look at pg_stat_activity (which also happens to have the
pid).

If you want just your own, combine it with pg_backend_pid, as in:

select
  pid || ',' || backend_start
from pg_stat_activity where pid = pg_backend_pid();

Thanks!

Stephen




Re: [GENERAL] Really unique session ID - PID + connection timestamp?

2016-04-09 Thread John R Pierce

On 4/9/2016 1:30 AM, Durumdara wrote:

In MS we had...


If you want Microsoft's unique version of SQL, run Microsoft SQL. That
stuff you describe is a whole bunch of implementation-specific weirdness
from the standpoint of someone outside, looking in.





--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Really unique session ID - PID + connection timestamp?

2016-04-09 Thread Christoph Berg
Re: Durumdara 2016-04-09 

> In MS SQL the session id is a smallint, so it can repeat after server restarts,
> but my colleague found a "session creation timestamp".
> Together these form a unique key.
> With this we can check for dead sessions and we can clean their records.
> 
> We want to create the same mechanism.
> I know there are advisory locks in PG, but I want to use a session id.
> 
> This could be:
> pg_backend_pid()
> 
> But the PID may repeat.
> Where can I get a timestamp or some other unique data with which I can create a
> combined primary key?

Btw, what you are describing is exactly what %c in log_line_prefix
does.

Christoph




[GENERAL] Really unique session ID - PID + connection timestamp?

2016-04-09 Thread Durumdara
Dear Everybody!


In MS SQL we had a "persistent lock" structure and method.
It works across transactions because it is based on real records in a real table
with a primary key (tablename + id).

For garbaging we kept special session info.
In MS SQL the session id is a smallint, so it can repeat after server restarts,
but my colleague found a "session creation timestamp".
Together these form a unique key.
With this we can check for dead sessions and we can clean their records.

We want to create the same mechanism.
I know there are advisory locks in PG, but I want to use a session id.

This could be:
pg_backend_pid()

But the PID may repeat.
Where can I get a timestamp or some other unique data with which I can create a
combined primary key?

Thanks for your help!

dd