Re: CoreData database sharing and migration

2010-03-23 Thread Ben Trumbull

On Mar 22, 2010, at 4:06 AM, Steve Steinitz wrote:

> On 18/3/10, Ben Trumbull wrote:
> 
>> there wasn't a good solution for multiple machines simultaneously
>> accessing an SQLite db file (or most other incrementally updated
>> binary file formats).  By "good" I mean a solution that worked
>> reliably and didn't make other more important things work less well.
> 
> I'm curious about the reliability issues you saw.  Also, by less well do you 
> mean slower?

Because the different NFS clients have file caches with differing amounts of 
staleness, and the SQLite db is updated incrementally, it's possible for an NFS 
client to think it has the latest blocks, and then derive material from one and 
write it into another (it is after all a b-tree).  The written blocks have 
implicit dependencies on all the other active blocks in the database, so having 
stale data is bad.

>> For nearly all Mac OS X customers (sadly not you) achieving a near
>> 100x performance boost when accessing database files on an AFP or SMB
>> mount (like their home directory in educational deployments) is pretty
>> huge.
> 
> I agree.  But wouldn't those same educational institutions be prime 
> candidates for multiple machine access?

No.  The restriction is on multiple physical machines having the same database 
files open simultaneously.  While AFP will allow multiple logins to the same 
account when configured with an advanced setting, in general AFP servers 
prevent users from multiple simultaneous logins to the same account.

>> To address both sets of problems on all network FS, we enforce a
>> single exclusive lock on the server for the duration the application
>> has the database open.  Closing the database connection (or logging
>> out) allows another machine to take its turn.
> 
> Could my application close the database connection and re-open it to work 
> around the problem?  How would I do that?  I suppose once I got it going I'd 
> have to retry saves, fetches etc.

In theory, one could, but in practice that won't be very manageable without 
architectural changes.  Something along the lines of: open the remote 
connection, pull down interesting data and save it to a local file, and close 
the remote connection.  Given that your current deployment setup provides 
adequate performance, and all you need is a bug fix, I'm not sure this would be 
very helpful.

>> You'll get the 10.5 performance characteristics, however.
> 
> Again, the 10.5 performance over gigabit ethernet is almost unbelievably 
> fast.  I may know why.  Despite your helpful explanations I'm still not 
> exactly clear on the relationship between caching and locking, but I wonder 
> if the speed we are seeing is helped by the fact that the entire database 
> fits into the Synology NAS's 128meg cache?

That probably doesn't hurt.

> In another message in this thread, you made a tantalizing statement:
> 
>> Each machine can have its own database and they can share their results
>> with NSDistributedNotification or some other IPC/networking protocol. You 
>> can hook into the NSManagedObjectContextDidSaveNotification to track
>> when one of the peers has committed changes locally.
> 
> Let me guess how that would work: before saving, the peer would create a 
> notification containing the inserted, updated and deleted objects as recorded 
> in the MOC.  The receiving machine would attempt to make those changes on its 
> own database.  Some questions:
> 
> Would that really be feasible?

yes, but as you observe, it's more tractable for simple data records than 
complex graphs with common merge conflicts.

> Would it be a problem that each machine would have different 
> primary keys?

yes

> How would the receiving machine identify the local objects that changed 
> remotely?

typically this is done with a UUID & a fetch.  Since each database on each 
client is different, the stores themselves have different UUIDs, and any 
encoded NSManagedObjectID URI will note which store the objectID came from, so 
you could also map the objectID URIs to the local values directly.
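For illustration, such a fetch might look like the following sketch; the `uuid` attribute name is an assumption for the example, not something defined by Core Data or this thread:

```objc
#import <CoreData/CoreData.h>

// Resolve a remotely-changed record to its local twin by UUID.
// Returns nil if no local object carries that UUID.
static NSManagedObject *LocalObjectForUUID(NSString *uuid,
                                           NSString *entityName,
                                           NSManagedObjectContext *moc)
{
    NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
    [request setEntity:[NSEntityDescription entityForName:entityName
                                   inManagedObjectContext:moc]];
    [request setPredicate:[NSPredicate predicateWithFormat:@"uuid == %@", uuid]];
    [request setFetchLimit:1];  // UUIDs are unique, so one result suffices

    NSError *error = nil;
    NSArray *results = [moc executeFetchRequest:request error:&error];
    return [results count] > 0 ? [results objectAtIndex:0] : nil;
}
```

An index on the uuid attribute keeps these lookups cheap even on large tables.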

> Could relationships (indeed the object graph itself) feasibly be 
> maintained?

yes.  Updates to to-many relationships require the use of an (additions, 
subtractions) pair instead of simply setting the new contents.
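As a sketch, the receiving side could apply such a pair like this; `owner`, the `members` relationship name, and the two sets (assumed already resolved from incoming UUIDs) are all illustrative:

```objc
// Apply a to-many relationship delta rather than replacing the set
// wholesale, so concurrent edits from other peers aren't clobbered.
NSMutableSet *members = [owner mutableSetValueForKey:@"members"];
[members unionSet:addedObjects];    // the "additions" half of the pair
[members minusSet:removedObjects];  // the "subtractions" half of the pair
```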

> How would relationships between the remote objects be identified?  
> Hand-parsing?

Either by UUID or objectID URI.

> Has anyone done it that you know of?

yes.  I'm aware of 4 solutions; however, I would only recommend 1 as 
appropriate for the general case (skill, time, pain threshold), and it avoids 
complex relationship graphs: basic data record replication over DO.

> Is there sample code?


no.  The only real trick in converting the didSave notification into something 
appropriate for DO to consume is to copy it and replace the NSManagedObjects 
with a dictionary that has a UUID instead of an object ID, the attribute 
values, and the relationships (identified by UUID).
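A minimal sketch of that trick, assuming each entity carries a string attribute named `uuid` (that attribute name, and the function name, are illustrative, not from this thread):

```objc
#import <CoreData/CoreData.h>

// Build a DO-friendly payload from a did-save notification: each
// NSManagedObject becomes a plain dictionary keyed by UUID, with its
// entity name and attribute values.  Relationships would additionally be
// encoded as the UUIDs of their destination objects (omitted for brevity).
static NSDictionary *TransportPayloadForSaveNotification(NSNotification *note)
{
    NSMutableDictionary *payload = [NSMutableDictionary dictionary];
    NSArray *keys = [NSArray arrayWithObjects:
        NSInsertedObjectsKey, NSUpdatedObjectsKey, NSDeletedObjectsKey, nil];
    for (NSString *key in keys) {
        NSMutableArray *records = [NSMutableArray array];
        for (NSManagedObject *object in [[note userInfo] objectForKey:key]) {
            NSMutableDictionary *record = [NSMutableDictionary dictionary];
            [record setObject:[object valueForKey:@"uuid"] forKey:@"uuid"];
            [record setObject:[[object entity] name] forKey:@"entityName"];
            // Copy the attribute values; no NSManagedObject or objectID
            // crosses the wire, only plist-style values.
            NSDictionary *attributes = [[object entity] attributesByName];
            for (NSString *attributeName in attributes) {
                id value = [object valueForKey:attributeName];
                if (value) [record setObject:value forKey:attributeName];
            }
            [records addObject:record];
        }
        [payload setObject:records forKey:key];
    }
    return payload;
}
```

Deleted objects only need the UUID and entity name; the receiver fetches its local twin by UUID and applies the change.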

re: CoreData database sharing and migration

2010-03-22 Thread Steve Steinitz

Hi Ben,

Thank you for your detailed and candid reply.  I have some 
comments and questions on a few points, below...


On 18/3/10, Ben Trumbull wrote:


there wasn't a good solution for multiple machines simultaneously
accessing an SQLite db file (or most other incrementally updated
binary file formats).  By "good" I mean a solution that worked
reliably and didn't make other more important things work less well.


I'm curious about the reliability issues you saw.  Also, by less 
well do you mean slower?



NFS is just not a free Oracle.


That's a great sound bite.

But to be clear, database cost is not the issue.  We're not all 
cheapskates looking to save a buck on our database :)  The real 
issue is that Core Data works with and only with SQLite.  If it 
worked with Oracle I'd raise a purchase order and do a little 
dance.  And I wouldn't sneer down my nose at mysql.



For nearly all Mac OS X customers (sadly not you) achieving a near
100x performance boost when accessing database files on an AFP or SMB
mount (like their home directory in educational deployments) is pretty
huge.


I agree.  But wouldn't those same educational institutions be 
prime candidates for multiple machine access?



So we focused on making the experience that could work well work even
better.  10.6 is a significantly better network FS client as Apple
applications like Mail don't slam the servers with byte range lock
requests all the time (good for NFS), and on AFP also gets to use
local file caching.


That makes sense and sounds good.


To address both sets of problems on all network FS, we enforce a
single exclusive lock on the server for the duration the application
has the database open.  Closing the database connection (or logging
out) allows another machine to take its turn.


Could my application close the database connection and re-open 
it to work around the problem?  How would I do that?  I suppose 
once I got it going I'd have to retry saves, fetches etc.



This behavior was supposed to be opt in for Core Data apps, but on 10.6.2 it is 
not.


Opt in would be ideal -- especially dynamically configurable opt 
in (for when the CEO wants to access the system from home at night).



connecting from five machines to a single database on a Synology
network drive over afp.  ...latest values from the database during
idle time.



It can work technically on AFP.  However, the distributed cache
coherency problem is avoided by these network FS because they don't do
any file caching on files with locks.  Your server setup and
networking hardware are pretty sophisticated compared to most, so the
performance is adequate.


Actually, the performance is nearly as fast as a local hard 
disk.  My measuring system is not perfect but I've seen sql 
select times of 0.003 seconds on (well-indexed) tables with tens 
of thousands of records.  Sometimes I wonder if that's even possible.



As an engineer, I would wish AFP over VPN over Airport was the more
uncommon deployment scenario, but sadly not.


I can see how that would create pressure on your engineering 
team.  Also, I confess I'd like to set up exactly that scenario 
for the CEO :)


There are mysterious but harmless optimistic locking errors 
once in a while where no attributes appear to have changed



Those mysterious updates are probably version numbers being bumped
because the object's participation in a to-many relationship, either
outgoing or incoming, changed.


Thanks for confirming that.  My conflict logging avoids showing 
related objects.



10.6.2 only one machine can connect to a Core Data store.



ADC did pass your bug report along to us,


For the record, you were my first, very helpful, contact 
regarding the issue.  Only later did I file a bug report.



and it is a backward binary compatibility issue, and a regression from
10.6.0.  It will be fixed in a future update.


I'm relieved that you are clear about that.  I could easily see 
it slipping through the cracks.



You'll get the 10.5 performance characteristics, however.


Again, the 10.5 performance over gigabit ethernet is almost 
unbelievably fast.  I may know why.  Despite your helpful 
explanations I'm still not exactly clear on the relationship 
between caching and locking, but I wonder if the speed we are 
seeing is helped by the fact that the entire database fits into 
the Synology NAS's 128meg cache?



[for status updates regarding your bug] You should follow up with ADC


ADC explained to me that it's best to add status requests 
directly to the bug report.  Is that what you mean?



or Evangelism directly


That sounds like a great idea, but I was unable to find the 
Evangelism contact information.  Can you (or anyone on the list) 
point me in the right direction?


In another message in this thread, you made a tantalizing statement:


Each machine can have its own database and they can share their results
with NSDistributedNotification or some other IPC/networking 
protocol. You can hook into the NSManagedObjectContextDidSaveNotification 
to track when one of the peers has committed changes locally.

Re: CoreData database sharing and migration

2010-03-20 Thread Ben Trumbull
I'm not sure I understand your question.  "How" is that they just use Core Data 
with our SQLite NSPersistentStore, and it works for multiple processes on a 
single machine.  For most but not all customers, that's coordinated with POSIX 
byte range advisory locks (fcntl).  "Why" is to have Cocoa frameworks vending 
the user's Contacts and Events to various apps through an API.  Mail, iCal, and 
Address Book may all be working with Contacts data simultaneously.  etc.
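A byte range advisory lock of the kind mentioned is taken with fcntl(2); a minimal sketch (plain C, callable from any Cocoa process; not the actual frameworks' code):

```objc
#include <fcntl.h>
#include <string.h>

// Take an exclusive advisory lock on bytes [start, start+len) of an
// open file descriptor.  F_SETLKW blocks until the lock is granted.
// Cooperating processes on the same machine see each other's locks,
// which is what makes the multi-process single-machine case workable.
static int LockByteRange(int fd, off_t start, off_t len)
{
    struct flock fl;
    memset(&fl, 0, sizeof(fl));
    fl.l_type   = F_WRLCK;    /* exclusive (write) lock */
    fl.l_whence = SEEK_SET;   /* offsets relative to file start */
    fl.l_start  = start;
    fl.l_len    = len;
    return fcntl(fd, F_SETLKW, &fl);  /* 0 on success, -1 on error */
}
```

Being advisory, these locks only coordinate processes that ask; they don't stop an uncooperative writer.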

- Ben

On Mar 19, 2010, at 12:42 PM, Aurélien Hugelé wrote:

> Hi Ben,
> 
> Can you be kind enough to explain how Apple frameworks do that ? (multiple 
> processes, single machine, one SQLite database)
> I'm thinking of CalendarStore, AddressBook framework. Can you describe what 
> is the general idea behind those Apple frameworks?
> 
> 
> Aurélien,
> Objective Decision Team
> 
> 
> 
> 
> On 17 mars 2010, at 22:29, Ben Trumbull wrote:
> 
>>> I am wondering whether it is possible to create a database in core  
>>> data that can be opened by more than one application at the same time.  
>>> It is currently impossible to handle one SQLite database with two  
>>> instances of the same app. The problem is if user1 quits the app, the  
>>> data is saved but user2's instance of the app doesn't recognize this  
>>> file system change and just overwrites its version in memory. So the  
>>> data from user1 is gone. Is there a way I can handle this?
>>> 
>>> Second -- I am having more than two database versions now but still  
>>> want to support my version 1.0 but the mapping model only allows one  
>>> source model as well as only one target model. I would have to remove  
>>> one version but that makes version 1.0 users' database unusable.
>>> 
>>> Has anyone gotten something like this to work?
>> 
>> Yes, several Apple frameworks use Core Data databases from multiple 
>> processes simultaneously with a single user account and single physical 
>> machine.
>> 
>> Do you mean "more than one application simultaneously on more than one 
>> physical computer over NFS/AFP/SMB" ?  Don't do that.
>> 
>> Or do you mean an NSDocument based application using Core Data & an SQLite 
>> store ?  NSDocuments intentionally behave like TextEdit.  Last writer wins, 
>> overwrites everything.  If so, you should be using a non-document based Core 
>> Data project template.
>> 
>> - Ben
>> 
> 


- Ben



___

Cocoa-dev mailing list (Cocoa-dev@lists.apple.com)

Please do not post admin requests or moderator comments to the list.
Contact the moderators at cocoa-dev-admins(at)lists.apple.com

Help/Unsubscribe/Update your Subscription:
http://lists.apple.com/mailman/options/cocoa-dev/archive%40mail-archive.com

This email sent to arch...@mail-archive.com


Re: CoreData database sharing and migration

2010-03-20 Thread Tobias Jordan

Hi Ben,

Does this mean I can't use any mapping models anymore since they only  
update from one data model to another one? What's the right class to  
do it manually?

Thanks.

- Tobias

On Mar 17, 2010, at 11:35 PM, Ben Trumbull wrote:

About the second question -- do you know how to solve the migration  
with more than two data models?


From a specific NSPersistentStore's perspective there is only ever 1  
data model.  When you use multiple models and merge them together,  
then *union* is "the" data model.  The object you pass to the  
NSPersistentStoreCoordinator initializer is the one true data model  
regardless of whether it only exists in memory or how many separate  
model files were used to compose it.


When you migrate a store that is built on the union of multiple  
models, you must migrate the union to a new union.  You cannot  
migrate things piecemeal.  However, as you can merge models together  
at runtime, so too could you construct mapping models  
programmatically to handle the union.   However, that's probably  
more trouble than it's worth.  You should focus on migrating the  
union model that was used with the store as its own entity.


It might be easier to think about how you would give each schema a  
version number.  Each union of models is likely to be its own  
version, and you would migrate to a new union with a new version  
number.


- Ben


On Mar 17, 2010, at 3:28 PM, Tobias Jordan wrote:


Hi Ben,

Thanks so much for this brilliant suggestion, I haven't thought  
about something like this before but it's actually really fantastic.
About the second question -- do you know how to solve the migration  
with more than two data models?


- Tobias

On Mar 17, 2010, at 11:13 PM, Ben Trumbull wrote:



On Mar 17, 2010, at 2:59 PM, Tobias Jordan wrote:


Hello Ben,

Thanks a lot for responding! My problem is as follows: The  
database which is currently a non-document based core data SQLite  
one is normally stored in the local User Library of the user.  
(/Users/user/Library/Application Support/MyApp/database.db)


But there are cases in which two (or more) different physical  
machines must have access to the database.


Don't do that.  Network file systems do not provide real time  
distributed cache coherency.  NFS is not a free version of Oracle.


For example two designers working on a project and they both need  
the same database so they can share their results. This means  
they create a new database on their server and link my app to  
this database.


Each machine can have its own database and they can share their  
results with NSDistributedNotification or some other IPC/ 
networking protocol.  You can hook into the  
NSManagedObjectContextDidSaveNotification to track when one of the  
peers has committed changes locally.
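Registering for that notification is a one-liner; a sketch (the observer method name and context variable are illustrative):

```objc
// Observe local saves so their changes can be forwarded to peers over
// NSDistributedNotification, DO, or another IPC channel.
[[NSNotificationCenter defaultCenter]
    addObserver:self
       selector:@selector(peersShouldSyncAfterSave:)
           name:NSManagedObjectContextDidSaveNotification
         object:managedObjectContext];
```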


- Ben



As you've said, is there a way the data can be always immediately  
written to disk so there's no 'last writer wins'?


I am not using NSDocument based techniques -- it is really just  
one core data DB.


Thank you!

Regards,
Tobias


On Mar 17, 2010, at 10:29 PM, Ben Trumbull wrote:

I am wondering whether it is possible to create a database in  
core
data that can be opened by more than one application at the  
same time.

It is currently impossible to handle one SQLite database with two
instances of the same app. The problem is if user1 quits the  
app, the
data is saved but user2's instance of the app doesn't recognize  
this
file system change and just overwrites its version in memory.  
So the

data from user1 is gone. Is there a way I can handle this?

Second -- I am having more than two database versions now but  
still
want to support my version 1.0 but the mapping model only  
allows one
source model as well as only one target model. I would have to  
remove

one version but that makes version 1.0 users' database unusable.

Has anyone gotten something like this to work?


Yes, several Apple frameworks use Core Data databases from  
multiple processes simultaneously with a single user account and  
single physical machine.


Do you mean "more than one application simultaneously on more  
than one physical computer over NFS/AFP/SMB" ?  Don't do that.


Or do you mean an NSDocument based application using Core Data &  
an SQLite store ?  NSDocuments intentionally behave like  
TextEdit.  Last writer wins, overwrites everything.  If so, you  
should be using a non-document based Core Data project template.


- Ben






- Ben












Re: CoreData database sharing and migration

2010-03-20 Thread Aurélien Hugelé
Hi Ben,

Can you be kind enough to explain how Apple frameworks do that ? (multiple 
processes, single machine, one SQLite database)
I'm thinking of CalendarStore, AddressBook framework. Can you describe what is 
the general idea behind those Apple frameworks?


Aurélien,
Objective Decision Team




On 17 mars 2010, at 22:29, Ben Trumbull wrote:

>> I am wondering whether it is possible to create a database in core  
>> data that can be opened by more than one application at the same time.  
>> It is currently impossible to handle one SQLite database with two  
>> instances of the same app. The problem is if user1 quits the app, the  
>> data is saved but user2's instance of the app doesn't recognize this  
>> file system change and just overwrites its version in memory. So the  
>> data from user1 is gone. Is there a way I can handle this?
>> 
>> Second -- I am having more than two database versions now but still  
>> want to support my version 1.0 but the mapping model only allows one  
>> source model as well as only one target model. I would have to remove  
>> one version but that makes version 1.0 users' database unusable.
>> 
>> Has anyone gotten something like this to work?
> 
> Yes, several Apple frameworks use Core Data databases from multiple processes 
> simultaneously with a single user account and single physical machine.
> 
> Do you mean "more than one application simultaneously on more than one 
> physical computer over NFS/AFP/SMB" ?  Don't do that.
> 
> Or do you mean an NSDocument based application using Core Data & an SQLite 
> store ?  NSDocuments intentionally behave like TextEdit.  Last writer wins, 
> overwrites everything.  If so, you should be using a non-document based Core 
> Data project template.
> 
> - Ben
> 



re: CoreData database sharing and migration

2010-03-18 Thread Ben Trumbull
> On 17/3/10, cocoa-dev-requ...@lists.apple.com wrote:
> 
>> Do you mean "more than one application simultaneously on more than one 
>> physical computer over NFS/AFP/SMB" ?  Don't do that.
> 
> When did that become the official policy, Ben?

The short answer is in 10.6.

The longer answer is that there are two classes of problems, one for NFS and 
one for AFP/SMB.  

For NFS, the protocol does not reliably allow for clients to maintain their 
local file caches coherently with the server in real time.  Only the newest NFS 
servers even respect file cache coherency around byte range file locks **at 
all**.  Unfortunately, the latest protocol doesn't mesh well with SQLite or 
most other existing incrementally updated file formats.   Many deployed NFS 
servers and clients only provide "close to open" coherency.  File 
modification timestamps on NFS servers may (or may not) provide accuracy better 
than 1.0 seconds.  And so forth.  There's also no good way to perform a cache 
bypassing file read on OSX or selectively evict ranges of files from the local 
file cache by hand.  We churned on this for a while with various teams, and 
there wasn't a good solution for multiple machines simultaneously accessing an 
SQLite db file (or most other incrementally updated binary file formats).  By 
"good" I mean a solution that worked reliably and didn't make other more 
important things work less well.  

NFS is just not a free Oracle.  Software that wants real time distributed cache 
coherency needs to use IPC and manage the problem itself.   It is trivial to 
write a program that writes to a file on NFS, sends the same update to its 
clone on another machine via IPC, and has the clone verify that NFS cache 
coherency indeed fails regularly (e.g. file read bytes != IPC read bytes).  
This is what I mean by real time distributed cache coherency.
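The verification half of that experiment might look like this sketch, assuming the peer's freshly written bytes arrive over some IPC channel:

```objc
#import <Foundation/Foundation.h>

// Compare what a peer says it just wrote (received over IPC) with what
// this NFS client's file cache returns for the same region of the file.
// A NO result demonstrates the stale-cache problem described above.
static BOOL CacheAgreesWithPeer(NSData *ipcBytes, NSString *path,
                                unsigned long long offset)
{
    NSFileHandle *file = [NSFileHandle fileHandleForReadingAtPath:path];
    [file seekToFileOffset:offset];
    NSData *fileBytes = [file readDataOfLength:[ipcBytes length]];
    [file closeFile];
    return [fileBytes isEqualToData:ipcBytes];
}
```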

For AFP & SMB, the problem is different.  These FS do not support POSIX 
advisory byte range locks at all.  They only support mandatory locks.  
Consequently, they never cache data read from files with any existing locks at 
all.  No file caching means all the I/O is slow.  Painfully slow.  AFP over 
Airport without file caching is bad.  The I/O throughput on a file free of 
locks on AFP is close to 100x better than on a file with a single byte locked, 
even one nowhere near where you are reading.  For nearly all Mac OS X 
customers (sadly not you) achieving a near 100x performance boost when 
accessing database files on an AFP or SMB mount (like their home directory in 
educational deployments) is pretty huge. 

So we focused on making the experience that could work well work even better.  
10.6 is a significantly better network FS client as Apple applications like 
Mail don't slam the servers with byte range lock requests all the time (good 
for NFS), and on AFP also gets to use local file caching.

To address both sets of problems on all network FS, we enforce a single 
exclusive lock on the server for the duration the application has the database 
open.  Closing the database connection (or logging out) allows another machine 
to take its turn.  This behavior was supposed to be opt in for Core Data apps, 
but on 10.6.2 it is not.

> I'm doing that with some success.  For the past three years, my 
> client's point of sale system has been connecting from five 
> machines to a single database on a Synology network drive over 
> afp.  I had to write some code to periodically get the latest 
> values from the database during idle time.  That was a little 
> complicated but it's working well now.

It can work technically on AFP.  However, the distributed cache coherency 
problem is avoided by these network FS because they don't do any file caching 
on files with locks.  Your server setup and networking hardware are pretty 
sophisticated compared to most, so the performance is adequate.  As an engineer, 
I would wish AFP over VPN over Airport was the more uncommon deployment 
scenario, but sadly not.

> There are mysterious but harmless optimistic locking errors once 
> in a while where no attributes appear to have changed -- just to 
> keep me on my toes, along with an occasional real data collision 
> (two staff members carelessly editing the same object) but we've 
> had no real issues in a year or so.

Those mysterious updates are probably version numbers being bumped because the 
object's participation in a to-many relationship, either outgoing or incoming, 
changed.

> However, 10.6.2 has a bug where only one machine can connect to 
> a Core Data store (it's likely an SQLite-level connection issue -- 
> but I'm not sure).

ADC did pass your bug report along to us, and it is a backward binary 
compatibility issue, and a regression from 10.6.0.  It will be fixed in a 
future update.  You'll get the 10.5 performance characteristics, however.

> So, for a while we were down to one 
> machine.  I eventually had to roll the five machines back to 
> Leopard.  That remains a PR nightmare.

Re: CoreData database sharing and migration

2010-03-17 Thread Ben Trumbull
> About the second question -- do you know how to solve the migration with more 
> than two data models?

From a specific NSPersistentStore's perspective there is only ever 1 data 
model.  When you use multiple models and merge them together, then *union* is 
"the" data model.  The object you pass to the NSPersistentStoreCoordinator 
initializer is the one true data model regardless of whether it only exists in 
memory or how many separate model files were used to compose it.

When you migrate a store that is built on the union of multiple models, you 
must migrate the union to a new union.  You cannot migrate things piecemeal.  
However, as you can merge models together at runtime, so too could you 
construct mapping models programmatically to handle the union.   However, 
that's probably more trouble than it's worth.  You should focus on migrating 
the union model that was used with the store as its own entity.

It might be easier to think about how you would give each schema a version 
number.  Each union of models is likely to be its own version, and you would 
migrate to a new union with a new version number.
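Concretely, the union model is what gets handed to the coordinator; a sketch, assuming the model files live in the main bundle:

```objc
#import <CoreData/CoreData.h>

// The merged ("union") model is the one true data model, regardless of
// how many separate model files it was composed from.  Migration then
// runs from one versioned union to the next.
NSManagedObjectModel *unionModel =
    [NSManagedObjectModel mergedModelFromBundles:nil];  // nil = main bundle
NSPersistentStoreCoordinator *coordinator =
    [[NSPersistentStoreCoordinator alloc]
        initWithManagedObjectModel:unionModel];
```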

- Ben


On Mar 17, 2010, at 3:28 PM, Tobias Jordan wrote:

> Hi Ben,
> 
> Thanks so much for this brilliant suggestion, I haven't thought about 
> something like this before but it's actually really fantastic.
> About the second question -- do you know how to solve the migration with more 
> than two data models?
> 
> - Tobias
> 
> On Mar 17, 2010, at 11:13 PM, Ben Trumbull wrote:
> 
>> 
>> On Mar 17, 2010, at 2:59 PM, Tobias Jordan wrote:
>> 
>>> Hello Ben,
>>> 
>>> Thanks a lot for responding! My problem is as follows: The database which 
>>> is currently a non-document based core data SQLite one is normally stored 
>>> in the local User Library of the user. (/Users/user/Library/Application 
>>> Support/MyApp/database.db)
>>> 
>>> But there are cases in which two (or more) different physical machines must 
>>> have access to the database.
>> 
>> Don't do that.  Network file systems do not provide real time distributed 
>> cache coherency.  NFS is not a free version of Oracle.
>> 
>>> For example two designers working on a project and they both need the same 
>>> database so they can share their results. This means they create a new 
>>> database on their server and link my app to this database.
>> 
>> Each machine can have its own database and they can share their results with 
>> NSDistributedNotification or some other IPC/networking protocol.  You can 
>> hook into the NSManagedObjectContextDidSaveNotification to track when one of 
>> the peers has committed changes locally.
>> 
>> - Ben
>> 
>>> 
>>> As you've said, is there a way the data can be always immediately written 
>>> to disk so there's no 'last writer wins'?
>>> 
>>> I am not using NSDocument based techniques -- it is really just one core 
>>> data DB.
>>> 
>>> Thank you!
>>> 
>>> Regards,
>>> Tobias
>>> 
>>> 
>>> On Mar 17, 2010, at 10:29 PM, Ben Trumbull wrote:
>>> 
> I am wondering whether it is possible to create a database in core
> data that can be opened by more than one application at the same time.
> It is currently impossible to handle one SQLite database with two
> instances of the same app. The problem is if user1 quits the app, the
> data is saved but user2's instance of the app doesn't recognize this
> file system change and just overwrites its version in memory. So the
> data from user1 is gone. Is there a way I can handle this?
> 
> Second -- I am having more than two database versions now but still
> want to support my version 1.0 but the mapping model only allows one
> source model as well as only one target model. I would have to remove
> one version but that makes version 1.0 users' database unusable.
> 
> Has anyone gotten something like this to work?
 
 Yes, several Apple frameworks use Core Data databases from multiple 
 processes simultaneously with a single user account and single physical 
 machine.
 
 Do you mean "more than one application simultaneously on more than one 
 physical computer over NFS/AFP/SMB" ?  Don't do that.
 
 Or do you mean an NSDocument based application using Core Data & an SQLite 
 store ?  NSDocuments intentionally behave like TextEdit.  Last writer 
 wins, overwrites everything.  If so, you should be using a non-document 
 based Core Data project template.
 
 - Ben
 
>>> 
>> 
>> 
>> - Ben
>> 
>> 
>> 
> 




re: CoreData database sharing and migration

2010-03-17 Thread Steve Steinitz

Hello

On 17/3/10, cocoa-dev-requ...@lists.apple.com wrote:


Do you mean "more than one application simultaneously on more than one physical 
computer over NFS/AFP/SMB" ?  Don't do that.


When did that become the official policy, Ben?

I'm doing that with some success.  For the past three years, my 
client's point of sale system has been connecting from five 
machines to a single database on a Synology network drive over 
afp.  I had to write some code to periodically get the latest 
values from the database during idle time.  That was a little 
complicated but it's working well now.


There are mysterious but harmless optimistic locking errors once 
in a while where no attributes appear to have changed -- just to 
keep me on my toes, along with an occasional real data collision 
(two staff members carelessly editing the same object) but we've 
had no real issues in a year or so.


However, 10.6.2 has a bug where only one machine can connect to 
a Core Data store (it's likely an SQLite-level connection issue -- 
but I'm not sure).  So, for a while we were down to one 
machine.  I eventually had to roll the five machines back to 
Leopard.  That remains a PR nightmare.


Best regards,

Steve



Re: CoreData database sharing and migration

2010-03-17 Thread Tobias Jordan

Hi Ben,

Thanks so much for this brilliant suggestion; I hadn't thought about 
something like this before, but it's actually really fantastic.
About the second question -- do you know how to solve the migration 
with more than two data models?
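One commonly used approach to this (a sketch under assumptions, not 
something from this thread) is to keep one mapping model per adjacent 
pair of versions and migrate stepwise, v1 -> v2 -> v3, until the store 
on disk matches the current model:

```objc
#import <CoreData/CoreData.h>

// Hypothetical sketch: migrate a store stepwise through an ordered list
// of models (v1, v2, v3, ...), using one mapping model per adjacent pair.
// Bundle lookup and file shuffling are simplified for illustration.
BOOL MigrateStoreThroughModels(NSURL *storeURL, NSArray *models, NSError **error)
{
    for (NSUInteger i = 0; i + 1 < [models count]; i++) {
        NSManagedObjectModel *src = [models objectAtIndex:i];
        NSManagedObjectModel *dst = [models objectAtIndex:i + 1];

        // Only run the step whose source version matches the store on disk.
        NSDictionary *meta = [NSPersistentStoreCoordinator
            metadataForPersistentStoreOfType:NSSQLiteStoreType
                                         URL:storeURL
                                       error:error];
        if (!meta) return NO;
        if (![src isConfiguration:nil compatibleWithStoreMetadata:meta]) continue;

        NSMappingModel *mapping = [NSMappingModel mappingModelFromBundles:nil
                                                           forSourceModel:src
                                                         destinationModel:dst];
        NSMigrationManager *manager = [[[NSMigrationManager alloc]
            initWithSourceModel:src destinationModel:dst] autorelease];

        NSURL *nextURL = [storeURL URLByAppendingPathExtension:@"next"];
        if (![manager migrateStoreFromURL:storeURL
                                     type:NSSQLiteStoreType
                                  options:nil
                         withMappingModel:mapping
                         toDestinationURL:nextURL
                          destinationType:NSSQLiteStoreType
                       destinationOptions:nil
                                    error:error])
            return NO;

        // Swap the migrated file into place (error handling elided).
        [[NSFileManager defaultManager] removeItemAtURL:storeURL error:NULL];
        [[NSFileManager defaultManager] moveItemAtURL:nextURL toURL:storeURL error:NULL];
    }
    return YES;
}
```

This way a 1.0 store is carried forward through every intermediate 
version without needing a direct 1.0-to-current mapping model.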


- Tobias

On Mar 17, 2010, at 11:13 PM, Ben Trumbull wrote:



> On Mar 17, 2010, at 2:59 PM, Tobias Jordan wrote:
> 
>> Hello Ben,
>> 
>> Thanks a lot for responding! My problem is as follows: The database 
>> which is currently a non-document based Core Data SQLite one is 
>> normally stored in the user's local Library 
>> (/Users/user/Library/Application Support/MyApp/database.db).
>> 
>> But there are cases in which two (or more) different physical 
>> machines must have access to the database.
> 
> Don't do that.  Network file systems do not provide real time 
> distributed cache coherency.  NFS is not a free version of Oracle.
> 
>> For example two designers working on a project and they both need 
>> the same database so they can share their results. This means they 
>> create a new database on their server and link my app to this 
>> database.
> 
> Each machine can have its own database and they can share their 
> results with NSDistributedNotification or some other IPC/networking 
> protocol.  You can hook into the 
> NSManagedObjectContextDidSaveNotification to track when one of the 
> peers has committed changes locally.
> 
> - Ben
> 
>> As you've said, is there a way the data can always be immediately 
>> written to disk so there's no 'last writer wins'?
>> 
>> I am not using NSDocument based techniques -- it is really just one 
>> Core Data DB.
>> 
>> Thank you!
>> 
>> Regards,
>> Tobias
>> 
>> On Mar 17, 2010, at 10:29 PM, Ben Trumbull wrote:
>> 
>>>> I am wondering whether it is possible to create a database in core
>>>> data that can be opened by more than one application at the same 
>>>> time.
>>>> 
>>>> It is currently impossible to handle one SQLite database with two
>>>> instances of the same app. The problem is if user1 quits the app, 
>>>> the data is saved but user2's instance of the app doesn't recognize 
>>>> this file system change and just overwrites its version in memory. 
>>>> So the data from user1 is gone. Is there a way I can handle this?
>>>> 
>>>> Second -- I am having more than two database versions now but still
>>>> want to support my version 1.0 but the mapping model only allows 
>>>> one source model as well as only one target model. I would have to 
>>>> remove one version but that makes version 1.0 users' database 
>>>> unusable.
>>>> 
>>>> Has anyone gotten something like this to work?
>>> 
>>> Yes, several Apple frameworks use Core Data databases from 
>>> multiple processes simultaneously with a single user account and 
>>> single physical machine.
>>> 
>>> Do you mean "more than one application simultaneously on more than 
>>> one physical computer over NFS/AFP/SMB" ?  Don't do that.
>>> 
>>> Or do you mean an NSDocument based application using Core Data & 
>>> an SQLite store ?  NSDocuments intentionally behave like 
>>> TextEdit.  Last writer wins, overwrites everything.  If so, you 
>>> should be using a non-document based Core Data project template.
>>> 
>>> - Ben







Re: CoreData database sharing and migration

2010-03-17 Thread Ben Trumbull

On Mar 17, 2010, at 2:59 PM, Tobias Jordan wrote:

> Hello Ben,
> 
> Thanks a lot for responding! My problem is as follows: The database which is 
> currently a non-document based Core Data SQLite one is normally stored in the 
> user's local Library (/Users/user/Library/Application 
> Support/MyApp/database.db).
> 
> But there are cases in which two (or more) different physical machines must 
> have access to the database.

Don't do that.  Network file systems do not provide real time distributed cache 
coherency.  NFS is not a free version of Oracle.

> For example two designers working on a project and they both need the same 
> database so they can share their results. This means they create a new 
> database on their server and link my app to this database.

Each machine can have its own database and they can share their results with 
NSDistributedNotification or some other IPC/networking protocol.  You can hook 
into the NSManagedObjectContextDidSaveNotification to track when one of the 
peers has committed changes locally.
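For the same-machine, multiple-process case, a minimal sketch of this 
hook (hypothetical names; note that NSDistributedNotificationCenter only 
reaches processes on the same machine, so peers on other physical 
machines would need an actual network channel instead):

```objc
#import <CoreData/CoreData.h>

// Hypothetical sketch: relay local Core Data saves to peer processes.
// The notification name is an assumption for illustration.
static NSString * const kPeerSaveNote = @"com.example.MyApp.PeerDidSave";

@interface PeerSyncRelay : NSObject
@end

@implementation PeerSyncRelay
- (id)init
{
    if ((self = [super init])) {
        // Watch our own saves...
        [[NSNotificationCenter defaultCenter]
            addObserver:self selector:@selector(localContextDidSave:)
                   name:NSManagedObjectContextDidSaveNotification object:nil];
        // ...and announcements from peer processes.
        [[NSDistributedNotificationCenter defaultCenter]
            addObserver:self selector:@selector(peerDidSave:)
                   name:kPeerSaveNote object:nil];
    }
    return self;
}

- (void)localContextDidSave:(NSNotification *)note
{
    // Tell peers we committed, so they can re-fetch instead of
    // trusting whatever they have cached in memory.
    [[NSDistributedNotificationCenter defaultCenter]
        postNotificationName:kPeerSaveNote object:nil userInfo:nil
          deliverImmediately:YES];
}

- (void)peerDidSave:(NSNotification *)note
{
    // Re-fetch, or -refreshObject:mergeChanges: affected objects, here.
}
@end
```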

- Ben

> 
> As you've said, is there a way the data can be always immediately written to 
> disk so there's no 'last writer wins'?
> 
> I am not using NSDocument based techniques -- it is really just one core data 
> DB.
> 
> Thank you!
> 
> Regards,
> Tobias
> 
> 
> On Mar 17, 2010, at 10:29 PM, Ben Trumbull wrote:
> 
>>> I am wondering whether it is possible to create a database in core
>>> data that can be opened by more than one application at the same time.
>>> It is currently impossible to handle one SQLite database with two
>>> instances of the same app. The problem is if user1 quits the app, the
>>> data is saved but user2's instance of the app doesn't recognize this
>>> file system change and just overwrites its version in memory. So the
>>> data from user1 is gone. Is there a way I can handle this?
>>> 
>>> Second -- I am having more than two database versions now but still
>>> want to support my version 1.0 but the mapping model only allows one
>>> source model as well as only one target model. I would have to remove
>>> one version but that makes version 1.0 users' database unusable.
>>> 
>>> Has anyone gotten something like this to work?
>> 
>> Yes, several Apple frameworks use Core Data databases from multiple 
>> processes simultaneously with a single user account and single physical 
>> machine.
>> 
>> Do you mean "more than one application simultaneously on more than one 
>> physical computer over NFS/AFP/SMB" ?  Don't do that.
>> 
>> Or do you mean an NSDocument based application using Core Data & an SQLite 
>> store ?  NSDocuments intentionally behave like TextEdit.  Last writer wins, 
>> overwrites everything.  If so, you should be using a non-document based Core 
>> Data project template.
>> 
>> - Ben
>> 
> 







Re: CoreData database sharing and migration

2010-03-17 Thread Tobias Jordan

Hello Ben,

Thanks a lot for responding! My problem is as follows: The database 
which is currently a non-document based Core Data SQLite one is 
normally stored in the user's local Library 
(/Users/user/Library/Application Support/MyApp/database.db).


But there are cases in which two (or more) different physical machines  
must have access to the database. For example two designers working on  
a project and they both need the same database so they can share their  
results. This means they create a new database on their server and  
link my app to this database.


As you've said, is there a way the data can always be immediately 
written to disk so there's no 'last writer wins'?


I am not using NSDocument based techniques -- it is really just one  
core data DB.


Thank you!

Regards,
Tobias


On Mar 17, 2010, at 10:29 PM, Ben Trumbull wrote:


>> I am wondering whether it is possible to create a database in core
>> data that can be opened by more than one application at the same 
>> time.
>> 
>> It is currently impossible to handle one SQLite database with two
>> instances of the same app. The problem is if user1 quits the app, the
>> data is saved but user2's instance of the app doesn't recognize this
>> file system change and just overwrites its version in memory. So the
>> data from user1 is gone. Is there a way I can handle this?
>> 
>> Second -- I am having more than two database versions now but still
>> want to support my version 1.0 but the mapping model only allows one
>> source model as well as only one target model. I would have to remove
>> one version but that makes version 1.0 users' database unusable.
>> 
>> Has anyone gotten something like this to work?
> 
> Yes, several Apple frameworks use Core Data databases from multiple 
> processes simultaneously with a single user account and single 
> physical machine.
> 
> Do you mean "more than one application simultaneously on more than 
> one physical computer over NFS/AFP/SMB" ?  Don't do that.
> 
> Or do you mean an NSDocument based application using Core Data & an 
> SQLite store ?  NSDocuments intentionally behave like TextEdit.  
> Last writer wins, overwrites everything.  If so, you should be using 
> a non-document based Core Data project template.
> 
> - Ben





re: CoreData database sharing and migration

2010-03-17 Thread Ben Trumbull
> I am wondering whether it is possible to create a database in core  
> data that can be opened by more than one application at the same time.  
> It is currently impossible to handle one SQLite database with two  
> instances of the same app. The problem is if user1 quits the app, the  
> data is saved but user2's instance of the app doesn't recognize this  
> file system change and just overwrites its version in memory. So the  
> data from user1 is gone. Is there a way I can handle this?
> 
> Second -- I am having more than two database versions now but still  
> want to support my version 1.0 but the mapping model only allows one  
> source model as well as only one target model. I would have to remove  
> one version but that makes version 1.0 users' database unusable.
> 
> Has anyone gotten something like this to work?

Yes, several Apple frameworks use Core Data databases from multiple processes 
simultaneously with a single user account and single physical machine.

Do you mean "more than one application simultaneously on more than one physical 
computer over NFS/AFP/SMB" ?  Don't do that.

Or do you mean an NSDocument based application using Core Data & an SQLite 
store ?  NSDocuments intentionally behave like TextEdit.  Last writer wins, 
overwrites everything.  If so, you should be using a non-document based Core 
Data project template.

- Ben



CoreData database sharing and migration

2010-03-17 Thread Tobias Jordan

Hi Folks,

I am wondering whether it is possible to create a database in core  
data that can be opened by more than one application at the same time.  
It is currently impossible to handle one SQLite database with two  
instances of the same app. The problem is if user1 quits the app, the  
data is saved but user2's instance of the app doesn't recognize this  
file system change and just overwrites it with its in-memory version. 
So the data from user1 is gone. Is there a way I can handle this?


Second -- I now have more than two database versions but still want 
to support version 1.0, and a mapping model only allows one source 
model and one target model. I would have to remove one version, but 
that makes version 1.0 users' databases unusable.


Has anyone gotten something like this to work?

Thanks a lot in advance!

Best regards,
Tobias J.