Hello,

Sorry for the delayed reply.

The way Falcon replication works in this case is that it first exports the
table to a staging location, then runs DistCp to copy that data to the
target cluster, and then does an import on the target. So any updates that
happen after the export won't be part of the copy on the target.
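A toy sketch of that three-phase flow (this is not Falcon code; the table names and values are made up) shows why a write that lands after the export is missed:

```python
# Simulate the three phases: export -> distcp -> import.
source_table = {"row1": "v1", "row2": "v2"}

# Phase 1: export takes a point-in-time copy into a staging area.
staging = dict(source_table)

# A write lands on the source *after* the export has run.
source_table["row3"] = "v3"

# Phases 2 and 3: DistCp moves the staged data to the target
# cluster, and the import loads it there.
target_table = dict(staging)

# The target holds only the exported snapshot, not the late write.
assert "row3" not in target_table
assert target_table == {"row1": "v1", "row2": "v2"}
```

The copy is consistent as of the moment the export ran; anything written afterwards only arrives with the next replication cycle.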



On Thu, Mar 10, 2016 at 11:01 AM, PG User <[email protected]> wrote:

> Hi All,
> I have question regarding hive replication which is done in
> https://issues.apache.org/jira/browse/FALCON-93
>
> I tried to read the code, but I fail to understand how it handles atomic
> replication.
> I see it uses read-only catalog storage. I am fairly new to Falcon.
>
> If one transaction is writing data to a table and we want to replicate it,
> how does Falcon ensure atomicity? Can someone please explain?
>
> Thanking you.
>
> - PG User
>
