store ROW_TIMESTAMP as TIMESTAMP in nanoseconds
Hi Team, I would like to store ROW_TIMESTAMP as a TIMESTAMP in nanoseconds. What is the best way to generate a timestamp with nanoseconds, and when I query using a ROW_TIMESTAMP stored in nanoseconds, do I lose precision down to milliseconds? Thanks
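[Editor's note: the thread has no reply in this archive. As background only, here is a small Python sketch (not Phoenix-specific) of generating a nanosecond timestamp and of exactly what is discarded if a store or query path truncates it to millisecond precision:]

```python
import time

def truncate_to_millis(ts_ns: int) -> int:
    """Drop everything below one millisecond from a nanosecond timestamp."""
    return (ts_ns // 1_000_000) * 1_000_000

# time.time_ns() (Python 3.7+) returns wall-clock time in nanoseconds.
now_ns = time.time_ns()
lost = now_ns - truncate_to_millis(now_ns)

# The sub-millisecond remainder is what millisecond precision cannot represent.
assert 0 <= lost < 1_000_000
```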
Re: optimistic lock in Phoenix
Another option is to upgrade to 4.8 or later and use transactions (which use optimistic concurrency under the covers): https://phoenix.apache.org/transactions.html Thanks, James

On Wed, Aug 30, 2017 at 9:53 AM, James Taylor wrote:
> Upgrade to 4.9 or later and use our atomic upsert command:
> http://phoenix.apache.org/atomic_upsert.html
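[Editor's note: the optimistic concurrency James refers to follows the general read-then-conditionally-write pattern. A toy in-memory Python sketch of that pattern (not Phoenix code; the class and names are made up for illustration):]

```python
import threading

class VersionedRow:
    """Toy row illustrating optimistic concurrency: an update only
    succeeds if the revision the caller read is still current."""
    def __init__(self):
        self._lock = threading.Lock()  # stands in for the server's atomicity
        self.revision = 0
        self.value = None

    def compare_and_set(self, expected_revision, new_value):
        with self._lock:
            if self.revision != expected_revision:
                return False           # someone else updated first
            self.value = new_value
            self.revision += 1
            return True

def update_with_retry(row, new_value, max_attempts=10):
    """Read the revision, attempt a conditional write, retry on conflict."""
    for _ in range(max_attempts):
        seen = row.revision            # "read" step
        if row.compare_and_set(seen, new_value):
            return row.revision
    raise RuntimeError("too much contention")

row = VersionedRow()
update_with_retry(row, "a")
update_with_retry(row, "b")
assert row.revision == 2 and row.value == "b"
```

A failed compare-and-set means another writer got in between the read and the write; retrying with the fresh revision avoids the lost update without holding a lock across the round trip.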
Re: optimistic lock in Phoenix
Upgrade to 4.9 or later and use our atomic upsert command: http://phoenix.apache.org/atomic_upsert.html

On Wed, Aug 30, 2017 at 9:38 AM Pradheep Shanmugam <pradheep.shanmu...@infor.com> wrote:
> Hi, we have a table in Phoenix 4.4 which has a modifyrevision column. When a
> thread updates the row, the modifyrevision has to be incremented. The only
> option I can see is to use an upsert select to get the latest revision, but
> since it is not atomic, how do we avoid dirty reads? Is there any other way
> this can be done? Thanks, Pradheep
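[Editor's note: the point of the atomic upsert linked above is that the read-modify-write happens in one step on the server, so concurrent increments cannot interleave. A toy Python sketch of that semantic only (not Phoenix code; the store class is invented for illustration):]

```python
import threading

class AtomicCounterStore:
    """Toy store where each increment is a single atomic step
    'server side', unlike a client-side read-then-write."""
    def __init__(self):
        self._lock = threading.Lock()
        self._rows = {}

    def upsert_increment(self, key):
        # The whole read-modify-write happens under one atomic step.
        with self._lock:
            self._rows[key] = self._rows.get(key, 0) + 1
            return self._rows[key]

store = AtomicCounterStore()
threads = [threading.Thread(target=store.upsert_increment, args=("row1",))
           for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# No increments are lost, because each one is atomic.
assert store._rows["row1"] == 50
```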
optimistic lock in Phoenix
Hi, we have a table in Phoenix 4.4 which has a modifyrevision column. When a thread updates the row, the modifyrevision has to be incremented. The only option I can see is to use an upsert select to get the latest revision, but since it is not atomic, how do we avoid dirty reads? Is there any other way this can be done? Thanks, Pradheep
Use Phoenix hints with Spark Integration [main use case: block cache disable]
Hello folks, I'm facing the issue of preventing records from being added to the block cache when my Spark application reads them as a DataFrame (e.g. sqlContext.phoenixTableAsDataFrame(myTable, myColumns, myPredicate, myZkUrl, myConf)). I know I can force no caching on a per-query basis when issuing SQL queries by leveraging the /*+ NO_CACHE */ hint. I know I can disable caching on a table-specific or column-family-specific basis through an ALTER TABLE HBase shell command. What I don't know is how to do so when leveraging the Phoenix-Spark APIs. I think my problem can be stated as a more general question: how can Phoenix hints be specified when using the Phoenix-Spark APIs? For my specific use case, I tried to set the property hfile.block.cache.size=0 in a Configuration object before creating the DataFrame, but I realized that records resulting from the underlying scan were still cached. Thank you in advance for your help. Best regards, Roberto