Answered over on SO
On Tue, May 23, 2017 at 3:34 PM, Barrett Strausser wrote:
> Crossposted to SO
>
> https://stackoverflow.com/questions/44144925/apache-phoenix-current-time-gives-npe
Crossposted to SO
https://stackoverflow.com/questions/44144925/apache-phoenix-current-time-gives-npe
FWIW, we're exposing a way to do snapshot reads (PHOENIX-3744), starting
with our MR integration (on top of which the Spark integration is built)
for our 4.11 release. This is about as close as you can get to reading HDFS
directly while still taking into account non-flushed HBase data.
Thanks,
Jam
No, nothing in particular; I was just looking to see whether there was a way.
Using the Spark plugin seems to be the standard approach.
Thank you so much for your input.
On Tue, May 23, 2017 at 4:01 PM, Jonathan Leech wrote:
There is a Phoenix / MapReduce integration. If you bypass HBase you will need
to take care not to miss edits that are only in memory and in the WAL.
If you bypass both Phoenix and HBase you will have to write code that can
interpret both... Possible, yes, but not a good use of your time.
Thanks, Jonathan. But I am looking to access the data directly from HDFS, not
go through Phoenix/HBase for access.
Is this possible?
Best regards
On May 23, 2017 3:35 PM, "Jonathan Leech" wrote:
I think you would use Spark for that, via the Phoenix Spark plugin.
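For reference, a minimal sketch of what that looks like with the phoenix-spark plugin's DataFrame API. The table name "MYTABLE" and the ZooKeeper quorum are placeholders, and this assumes a running HBase/Phoenix cluster with the phoenix-spark jar on the Spark classpath:

```scala
// Sketch: read a Phoenix table as a Spark DataFrame via the phoenix-spark
// plugin. "MYTABLE" and the zkUrl value are hypothetical placeholders.
import org.apache.spark.sql.SparkSession

object PhoenixReadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("phoenix-read")
      .getOrCreate()

    // Loads the Phoenix table; column pruning and filters are pushed
    // down to Phoenix where possible.
    val df = spark.read
      .format("org.apache.phoenix.spark")
      .option("table", "MYTABLE")
      .option("zkUrl", "zkhost:2181")
      .load()

    df.show()
    spark.stop()
  }
}
```

Because the read goes through Phoenix, this picks up rows still sitting in the memstore/WAL, which raw HDFS reads would miss.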
> On May 23, 2017, at 12:24 PM, Ash N <742...@gmail.com> wrote:

Hi All,
This may be a silly question. We are storing data through Apache Phoenix.
Is there anything special we have to do so that machine learning and other
analytics workloads can access this data from the HDFS layer, considering
that HBase stores its data in HDFS?
thanks,
-ash
I think you need to run the tool as "hbase" user.
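Concretely, one way to do that is to re-run the same IndexTool command from the original post under the "hbase" user. This sketch assumes sudo access on the cluster node; the table and path names are copied from the command in the thread:

```shell
# Re-run the IndexTool MapReduce job as the "hbase" user so it has the
# permissions needed to write and bulk-load the index HFiles.
sudo -u hbase hbase org.apache.phoenix.mapreduce.index.IndexTool \
  --data-table MYTABLE \
  --index-table MYTABLE_GLOBAL_INDEX \
  --output-path MYTABLE_GLOBAL_INDEX_HFILE
```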
On Tue, May 23, 2017 at 5:43 AM, cmbendre wrote:
I created an ASYNC index and ran the IndexTool MapReduce job to populate it.
Here is the command I used:

hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table MYTABLE
--index-table MYTABLE_GLOBAL_INDEX --output-path MYTABLE_GLOBAL_INDEX_HFILE

I can see that the index HFiles are created successfully.