Oh? Thanks - I hadn’t heard of such a thing. This would be Amazon RDS, which is 
basically MySQL, so it likely has something similar.

Either way, I guess I should delete any dups; they either cause an error 
somewhere or at least integrity problems I don’t need.

Is there a good wiki page on migration examples out there?



> On Nov 22, 2021, at 11:19 AM, Aaron Rosenzweig <aa...@chatnbike.com> wrote:
> 
> Sounds like you are using Postgres?
> 
> You can use the syntax “not valid” when you create a constraint, to stop the 
> bleeding immediately. The constraint is then only checked for new and modified 
> records, allowing the bad rows to co-exist. When you get around to it, you can 
> remove the duplicates. 
> 
> If it’s another database, they likely have something similar. 
> 
>> On Nov 22, 2021, at 10:18 AM, Jesse Tayler <jtay...@oeinc.com> wrote:
>> 
>> It’s not a compound key so much as just policy: it’s a handle for a social 
>> service, so there should be just one row with that value, and it doesn’t 
>> need to tie into the primary key.
>> 
>> I guess I can create a unique index just for that one attribute and it would 
>> presumably return an error upon save. I should re-write the EO to handle 
>> that error being raised and respond by returning the existing object…
>> 
>> I guess that is not hard to figure if that approach sounds sane.
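[Editor's sketch] The "catch the uniqueness error, then fetch and return the existing object" approach described above can be sketched as follows. This uses Python's sqlite3 as a stand-in (in EOF you would catch the save exception instead); the table, column, and function names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE member (id INTEGER PRIMARY KEY, handle TEXT UNIQUE)")

def get_or_create(handle):
    """Try to insert a row; on a uniqueness violation, fetch the
    existing row instead of failing."""
    try:
        cur = conn.execute("INSERT INTO member (handle) VALUES (?)", (handle,))
        return cur.lastrowid, True           # newly created
    except sqlite3.IntegrityError:
        row = conn.execute(
            "SELECT id FROM member WHERE handle = ?", (handle,)).fetchone()
        return row[0], False                 # already existed

print(get_or_create("jesse"))   # (1, True)
print(get_or_create("jesse"))   # (1, False)
```

Because the database enforces uniqueness, this stays correct even when two API calls race: one insert wins, the other gets the error and falls through to the fetch.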
>> 
>> I do have dups and I’d guess the constraint will simply fail if the database 
>> has any dups in it.
>> 
>> I guess writing a migration to handle / remove dups is not practical so I’d 
>> likely remove them by hand, then add the constraint in a migration update 
>> that would gently fail until there are no more dups…
>> 
>> 
>> 
>>> On Nov 22, 2021, at 10:07 AM, Samuel Pelletier <sam...@samkar.com> wrote:
>>> 
>>> Jesse,
>>> 
>>> So your rows have a primary key and some other unique identifier derived 
>>> from other attributes.
>>> 
>>> If the compound key is a combination of full attribute values, you can add a 
>>> compound unique key in the database: CREATE UNIQUE INDEX ON Table (col1, 
>>> col2, ..., coln)
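[Editor's sketch] The compound unique key suggested above behaves like this; sqlite3 is used as a stand-in database, and the table/column names are illustrative:

```python
import sqlite3

# A unique index over a pair of columns: only the combination must be
# unique, not each column on its own.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 TEXT, col2 TEXT)")
conn.execute("CREATE UNIQUE INDEX t_uq ON t (col1, col2)")

conn.execute("INSERT INTO t VALUES ('a', 'x')")
conn.execute("INSERT INTO t VALUES ('a', 'y')")   # ok: the pair differs
try:
    conn.execute("INSERT INTO t VALUES ('a', 'x')")  # duplicate pair
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```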
>>> 
>>> If it is derived from partial values, the most reliable way is to add a string 
>>> column holding the computed key, with its own unique constraint.
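[Editor's sketch] The "computed key column" idea above might look like this; sqlite3 stands in for the real database, and the table name, column names, and key recipe are all invented for illustration:

```python
import sqlite3

# Store a key derived from partial attribute values in its own UNIQUE
# column, so the database enforces uniqueness of the derived value.
def computed_key(first, last):
    return (first[:3] + ":" + last[:3]).lower()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (first TEXT, last TEXT, key TEXT UNIQUE)")

def insert(first, last):
    conn.execute("INSERT INTO person VALUES (?, ?, ?)",
                 (first, last, computed_key(first, last)))

insert("Jesse", "Tayler")
try:
    insert("Jessica", "Taylor")   # same partial values -> same computed key
except sqlite3.IntegrityError:
    print("duplicate computed key rejected")
```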
>>> 
>>> If you already have duplicates, you can add a method in the migration to 
>>> resolve them before adding the constraint, or do it manually...
>>> 
>>> Regards,
>>> 
>>> Samuel
>>> 
>>>> Le 22 nov. 2021 à 09:27, Jesse Tayler <jtay...@oeinc.com> a écrit :
>>>> 
>>>> It’s likely just a unique constraint.
>>>> 
>>>> It’s not UIDs or primary keys; it’s a unique row type based on a couple of 
>>>> strings where there should be only one, and that one should last forever.
>>>> 
>>>> There’s an API where calls can come in basically at the same time. Instead 
>>>> of fetching first to see if the object exists, I should likely respond to 
>>>> the SQL error rejecting the new row, and then fetch and return the existing 
>>>> object based on that error condition.
>>>> 
>>>> I’d suppose the database is the best place for that policy, but I don’t 
>>>> think I’ve implemented constraints quite like that before so I’d need to 
>>>> write some sort of Migrations for it if it’s to be reliable in all those 
>>>> situations where it might encounter duplicate data…hmmm…
>>>> 
>>>> 
>>>> 
>>>> 
>>>>> On Nov 22, 2021, at 8:59 AM, Samuel Pelletier <sam...@samkar.com> wrote:
>>>>> 
>>>>> Hi Jesse,
>>>>> 
>>>>> Your question may have multiple answers. Can you describe the context 
>>>>> and the sources of duplicates you fear?
>>>>> 
>>>>> Is the primary key generated by the WO app, or is it external (like a 
>>>>> GUID)?
>>>>> 
>>>>> Do you have a secondary identifier that should be unique ?
>>>>> 
>>>>> Anyway, you should add a constraint in the database if uniqueness is 
>>>>> required (this applies to all frameworks in all languages).
>>>>> 
>>>>> If you use EOF primary key generation, you should not have problems with 
>>>>> duplicate keys. If you require high throughput, using UUID primary keys or 
>>>>> implementing a custom generator will help by saving round trips to the 
>>>>> database server. Inserting in batches will also be faster than 
>>>>> individual inserts.
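[Editor's sketch] The batch-insert point above can be illustrated with a prepared statement executed over many rows at once; sqlite3 is a stand-in here (with JDBC/EOF the analogue would be batched statements), and the table name is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")

# One prepared statement driven over many rows, instead of issuing one
# INSERT per row; against a remote server this also saves round trips.
rows = [("name-%d" % i,) for i in range(1000)]
conn.executemany("INSERT INTO t (name) VALUES (?)", rows)

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 1000
```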
>>>>> 
>>>>> Regards,
>>>>> 
>>>>> Samuel
>>>>> 
>>>>>> Le 22 nov. 2021 à 08:34, Jesse Tayler via Webobjects-dev 
>>>>>> <webobjects-dev@lists.apple.com> a écrit :
>>>>>> 
>>>>>> I asked on slack but I figured I’d ping the list
>>>>>> 
>>>>>> Who has a good way to ensure a serial EO creation queue when the system 
>>>>>> could be hit really fast and you must avoid duplicate entries?
>>>>>> 
>>>>>> I’m a bit surprised I don’t recall EOF-style solutions for such things. 
>>>>>> Maybe the Amazon RDS database has a shared connection pattern the apps 
>>>>>> can use, but I didn’t see anything, so I figure this is application-level 
>>>>>> stuff.
>>>>>> 
>>>>>> Thoughts? Suggestions?
>>>>>> _______________________________________________
>>>>>> Do not post admin requests to the list. They will be ignored.
>>>>>> Webobjects-dev mailing list      (Webobjects-dev@lists.apple.com)
>>>>>> Help/Unsubscribe/Update your Subscription:
>>>>>> https://lists.apple.com/mailman/options/webobjects-dev/samuel%40samkar.com
>>>>>> 
>>>>>> This email sent to sam...@samkar.com
>>>>> 
>>>> 
>>> 
>> 
> 
