On 4 May 2011 08:03, Mark Diggory <mdigg...@atmire.com> wrote:

> I think we should work to keep the Data Access Layer and the Business Logic
> Layer separate.
>

I agree with that principle, but...


> I see the Event Manager as Business Logic layer and the DAO as Persistence
> layer.  If you need to enforce integrity for an Addon that works very
> closely with the database, then you're more than likely working in the Data
> Access Layer.
>

The Event Manager as it stands is a Business Logic thing. But really, a
messaging bus is system infrastructure rather than being a particular layer
of anything (although it's typically used to orchestrate business
components).

But there is a bigger issue with how you choose to integrate Addons. Yes, an
addon may have its own data access. But it's highly unlikely that it would
have data access that is not related to its own business logic.

So what are you going to have?

a) stack / intercept DAOs for your integration, which then execute business
logic that in turn does data access? That sounds like a big conflation of
layers.

b) integration at the business layer to process the model, creating data
which is ('magically'?) stored / retrieved by an integration at the data
access layer?

If you want reliable [and easy to use] addons, then you need to minimize the
number of integrations with the core of DSpace - which more likely than not
means you should be integrating at the layer of business logic, with your
addon then driving its own, separate [although potentially still within the
same schema] data access layer.


> If you're running a secondary service that needs to be informed about changes
> in the Domain Model after their persistence has been successfully completed,
> then the EventManager / Service is appropriate.
>

As it is currently coded, yes. That does not mean that the same principle -
configuring notification to locally provided code through a well-defined API
- can't work equally well where the notification occurs prior to the
persistence.

And note that I'm not even saying that the data access layer is what is
providing the notification. Potentially, it could be - but you still need
business logic that initiates / calls down to the data access layer, and
it's equally valid to have the notification triggered within the business
logic before / around the call to the data layer.
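
To sketch what I mean - names like PreUpdateListener and ItemDAO here are
illustrative assumptions, not existing DSpace API - the business logic could
notify Spring-wired listeners before it ever touches the DAO:

import java.sql.SQLException;
import java.util.List;

public interface PreUpdateListener {
    // Invoked before persistence; an exception here aborts the update.
    void beforeUpdate(Context context, Item item) throws SQLException;
}

public class ItemService {
    private List<PreUpdateListener> listeners; // injected via Spring config
    private ItemDAO itemDAO;                   // likewise injected

    public void update(Context context, Item item) throws SQLException {
        // Notification happens in the business logic, prior to persistence
        for (PreUpdateListener listener : listeners) {
            listener.beforeUpdate(context, item);
        }
        // Only then call down to the data access layer
        itemDAO.update(context, item);
    }
}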

> When do you need your addon to be tied to the transactional window? You
> need it when the Addon's data is considered "as mission critical" as that
> of the core data model.
>

Mission critical is not the same thing as having hard ties to the data model
and potentially making it very tricky to upgrade DSpace.

> For instance, in the Versioning prototype I am working on, it needs to be
> closely tied to the success of the persistence of the DSpaceObject. If
> there is a problem in the versioning, we need to roll back the whole
> transaction. We should be looking to tie the Versioning logic into the
> Persistence tier, and it's highly likely that the versioning will be tied
> to the underlying persistence anyhow - we will implement versioning in a
> legacy DSpace RDBMS quite differently than we would in a Fedora repository.
>

Interesting, but also extremely scary. The only implementation of versioning
within a repository that I personally support has separate objects that have
relationships established via their metadata. There is some complication
there as to how you index that content for presentation (i.e. avoiding
duplicate entries on browse / search, etc.), but essentially the entire
implementation of that is/can be supported purely within the business logic
without any changes at the data access layer.
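
For illustration, a new version under that approach is just another item
whose metadata points back at its predecessor. A minimal sketch using the
standard addMetadata() call - the choice of dc.relation.isversionof as the
linking field is my assumption, not an established convention:

import java.sql.SQLException;
import org.dspace.authorize.AuthorizeException;
import org.dspace.content.Item;

public class VersionLinker {
    // Relate two separately persisted items purely through their metadata -
    // no schema changes, no new tables, just business logic.
    public void linkVersions(Item newVersion, Item oldItem)
            throws SQLException, AuthorizeException {
        newVersion.addMetadata("dc", "relation", "isversionof", null,
                oldItem.getHandle());
        newVersion.update();
    }
}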

Even going back to the discussions that we had in the DSpace Architecture
Review, the versioning that we drew up there revolved around a 'resource
map' of the item, updating that map to add/remove/replace the changing
nodes. Although those modifications really should be done as a core change
[i.e. as part of trunk].


> For example, the following context service can be looked up if access to
> the current context is required, but it is not necessary for calling code
> to get it directly when executing a DAO call.
>

>
> http://scm.dspace.org/svn/repo/modules/dspace-core/trunk/impl/src/main/java/org/dspace/core/LegacyContextService.java
>
> DSpace dspace = new DSpace();
> CollectionService cs = dspace.getSingletonService(CollectionService.class);
> Collection col = cs.find(id);
> col.setName("Some text");
> cs.update(col);
>
>
> Behind the CollectionService, it will pick up the current Context from the
> request when necessary and provide the commit, as opposed to
>
> Context context = new Context();
> Collection col = Collection.find(context, id);
> col.setName("Some text.");
> col.update();
> context.commit();
>
>
> So if you want to introduce additional tasks that need to occur in the
> request cycle, they are added as RequestInterceptors that happen prior to
> the Context Interceptor being closed.
>

So what is the transaction boundary in the first example? In the second it's
quite clear you are creating the context, doing some work and committing it.

In the first case, it's not clear where the transaction starts (it could be
in the creation of the DSpace object, or it could obtain an existing
context). But importantly, there is nothing to trigger the transaction being
committed / closed.

It's also quite horrible that you are going through a proxy to get to a
Spring-managed bean, when you should have just had it injected in the first
place.

Note that I'm not particularly worried about having explicit transactions in
the code - it would be quite reasonable to have code that simply gets
injected with a service / data access object and that picks up connections /
transactions automatically as configured within Spring (and outside of the
code). But this isn't using commonly understood Spring transaction
mechanisms, and it's (apparently) just burying arcane operations within
custom code.
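
For the record, the commonly understood mechanism would look something like
the following - a minimal sketch, assuming the service and DAO are wired
together in the Spring application context (echoing the setName()/update()
calls from your example above):

import java.sql.SQLException;
import org.springframework.transaction.annotation.Transactional;

public class CollectionServiceImpl implements CollectionService {
    private CollectionDAO collectionDAO; // injected by Spring - no proxy lookup

    public void setCollectionDAO(CollectionDAO collectionDAO) {
        this.collectionDAO = collectionDAO;
    }

    // The transaction boundary is explicit and visible: it opens on entry
    // to this method, and commits (or rolls back on failure) on return.
    @Transactional
    public void rename(int id, String name) throws SQLException {
        Collection col = collectionDAO.find(id);
        col.setName(name);
        collectionDAO.update(col);
    }
}

With <tx:annotation-driven/> in the configuration, the connections and
commits are handled outside of the code - which is exactly the "configured
within Spring" behaviour I'm describing.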

> Which means that the Usage and Context Events would be merged into one
> infrastructure. I think it's wise to clearly separate the systems that are
> within transaction (and thus need to be responsible enough to participate
> in the transaction) from those that simply monitor the application and
> process change/read events. It will be clearer to other developers to keep
> the two separate.
>

To be honest, I don't think any developer should ever be that relaxed about
how they handle conditions that would affect transactions. There are always
knock-on implications (e.g. does each call to a listener for an
'out-of-context' event get its own context?). Obviously, we should seek to
provide as much protection for the system as we can from badly behaving
listeners, but we should still be advocating that they are always written
responsibly enough that they could participate in the transaction.


>> And we can still use foreign key constraints in the 'immediate execution'
>> consumers. But, for any addons, I would strongly argue that you should
>> not use foreign key constraints. Any addons that start imposing database
>> integrity constraints are going to cause havoc during upgrades if we want
>> / need to make changes to the core model.
>
> Yes and No. When the implementation is closely tied to the DAO
> implementation, you will want to be able to use the actual capabilities of
> the persistence platform to make your addon more efficient.
>

Depends on the implementation, but foreign key constraints on tables are
probably more work for the database than doing it yourself. Shifting
processing from
the application server to the database might be more efficient for
constrained resources, but it's easier and cheaper to scale the application
server processing.

> The Stackable DAO is more than an immediate in-process consumer by design
> - it's specifically targeting being a member of the call stack for the DAO
> method in question (ItemDAO.delete(), CollectionDAO.update()) and as such,
> there's no need to be messing about with "extraneous mapping of Events to
> consumers and all the extra code associated with that". It's less fragile
> and rigid than writing an extensive messaging framework to handle
> processing the transaction that has little or no specification other than
> its own obtuse internal business logic.
>

I worry more about having a MyItemDAO.update() that actually ends up doing
anything but updating an item. The work it's doing is not reflected by the
contract it's masquerading behind, which doesn't help anyone.


>
> It is just easier to comprehend if one wired the members of the call stack
> directly in Spring
>
> ItemDAO.update() --> MyItemDAO.update() --> YourItemDAO.update();
>
> ItemDAO.delete() --> MyItemDAO.delete() --> YourItemDAO.delete();
>
> CollectionDAO.delete() --> MyCollectionDAO.delete() -->
> YourCollectionDAO.delete();
>
> With this at least you can predict the call stack in your configuration
>
> As opposed to
>
> Item.update() --> Context.addEvent(event) --> Context.commit() -->
> EventManager.dispatchEvent(event)  -->  Dispatcher.callConsumer(event)  -->
> Consumer.consume(event) --> verbose conditional switch logic to figure out
> what the hell generated the event and what to do with it now.
>
> Which looks clearer to you?
>

Right, that's how it operates currently with delayed events. But with
synchronous comms, it would just be:

Item.update()
   -> eventPublisher.publishEvent(new ItemEvent(this))
      -> Spring magic
         -> MyListener.onApplicationEvent()

And yes, Spring has an ApplicationEvent mechanism within it for synchronous
messaging, which can be further tied to the Spring Integration codebase for
asynchronous messaging.
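
Concretely - ItemEvent and MyListener are illustrative, but ApplicationEvent
and ApplicationListener are real Spring interfaces:

import org.springframework.context.ApplicationEvent;
import org.springframework.context.ApplicationListener;

// The event simply wraps its source (here, the item being updated)
public class ItemEvent extends ApplicationEvent {
    public ItemEvent(Object source) {
        super(source);
    }
}

// Any bean implementing ApplicationListener is picked up by the context
// and invoked synchronously, in the publisher's thread - i.e. inside the
// same transaction, so a failure propagates straight back to the caller.
public class MyListener implements ApplicationListener<ItemEvent> {
    public void onApplicationEvent(ItemEvent event) {
        // react to the change here
    }
}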


More directly answering your question, I'm not clear whether the stackable
DAO should be

ItemDAO.delete() --> MyItemDAO.delete() --> YourItemDAO.delete()

or

YourItemDAO.delete() -> MyItemDAO.delete() -> ItemDAO.delete()

Really, it should probably be:

ItemDAO.create() -> MyItemDAO.create() -> YourItemDAO.create()

and

YourItemDAO.delete() -> MyItemDAO.delete() -> ItemDAO.delete()

And in the case of an update... well, depending on what has changed about an
Item, and what foreign key references you've made, you might need your DAO
to execute either before or after [or both] the main update to the actual
Item.
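
Expressed as code, with MyItemDAO decorating the core DAO - the ItemDAO
signatures here are my assumptions about the prototype, so treat this as a
sketch of the ordering rather than the actual interfaces:

import java.sql.SQLException;

public class MyItemDAO implements ItemDAO {
    private final ItemDAO next; // the core (or next stacked) DAO, wired in Spring

    public MyItemDAO(ItemDAO next) {
        this.next = next;
    }

    public Item create(Context context) throws SQLException {
        // The core row has to exist first...
        Item item = next.create(context);
        // ...before the addon creates rows that reference it
        createAddonRows(context, item);
        return item;
    }

    public void delete(Context context, Item item) throws SQLException {
        // Addon rows referencing the item are removed first...
        removeAddonRows(context, item);
        // ...then the core deletes the item itself
        next.delete(context, item);
    }

    private void createAddonRows(Context context, Item item) {
        // addon-specific persistence would live here
    }

    private void removeAddonRows(Context context, Item item) {
        // addon-specific cleanup would live here
    }
}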


> Bear in mind that the messaging itself originated under a proposal to
> introduce asynchronous JMS-style event queues, and I haven't a clue how
> we'd maintain a transactional system across asynchronous queuing. Nor do I
> even want to think about it. The EventManager's already a complex beast;
> please, let's not make it more complex.
>

There are ways of doing distributed transactions, but that's probably more
trouble than it's worth. It's not an issue though - there's no reason why
you can't decree that if the 'event' is processed synchronously / within
the transaction, it has to be executed within the same JVM/thread. And if
you have an asynchronous event, then it can't / won't get to participate in
the transaction.
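
As a sketch of that decree - Dispatcher and Consumer as per the existing
EventManager, though the simplified consume(event) signature and the queue
handling are my assumptions:

import java.util.List;
import java.util.concurrent.BlockingQueue;

public class Dispatcher {
    private List<Consumer> synchronousConsumers; // wired via Spring
    private BlockingQueue<Event> asyncQueue;     // drained by a separate thread

    public void dispatch(Event event) throws Exception {
        // In-transaction consumers: same JVM/thread as the caller, so an
        // exception here can still roll the whole transaction back
        for (Consumer consumer : synchronousConsumers) {
            consumer.consume(event);
        }
        // Asynchronous path: queued and processed later, outside the
        // transaction - by definition it can't participate in it
        asyncQueue.offer(event);
    }
}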

G