...and you can use something like Avro <http://avro.apache.org/docs/1.7.6/api/cpp/html/index.html> or ProtoBuf <https://developers.google.com/protocol-buffers/docs/cpptutorial> for serialization.
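Whichever serialization library is chosen, records streamed over a pipe or socket still need framing so the Java listener knows where one message ends and the next begins. Below is a minimal length-prefixed framing sketch in Java using only the standard library; the class and method names are illustrative (they are not part of Avro or ProtoBuf), and the payload bytes would in practice be Avro- or ProtoBuf-encoded rows.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.IOException;

public class Framing {
    // Write one serialized record (e.g. an Avro or ProtoBuf payload)
    // with a 4-byte big-endian length prefix.
    static void writeFrame(DataOutputStream out, byte[] payload) throws IOException {
        out.writeInt(payload.length);
        out.write(payload);
    }

    // Read the next framed record, or return null at end of stream.
    static byte[] readFrame(DataInputStream in) throws IOException {
        int len;
        try {
            len = in.readInt();
        } catch (EOFException e) {
            return null;
        }
        byte[] payload = new byte[len];
        in.readFully(payload);
        return payload;
    }

    public static void main(String[] args) throws IOException {
        // Round-trip two frames through an in-memory stream; over a real
        // unix/tcp socket the streams would wrap the socket instead.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        writeFrame(out, "row1".getBytes("UTF-8"));
        writeFrame(out, "row2".getBytes("UTF-8"));

        DataInputStream in = new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));
        System.out.println(new String(readFrame(in), "UTF-8")); // prints: row1
        System.out.println(new String(readFrame(in), "UTF-8")); // prints: row2
        System.out.println(readFrame(in) == null);              // prints: true
    }
}
```

The listener side would loop over readFrame on the accepted socket, decode each payload, and hand it to the Phoenix JDBC UPSERT path described below.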
On Sat, May 10, 2014 at 11:33 PM, alex kamil <[email protected]> wrote:

> Unilocal,
>
> You can also decouple HBase from your app by streaming data via a simple
> unix pipe, unix socket, or TCP socket: just create a socket listener in
> Java on the HBase side, which uses the Phoenix JDBC driver to insert the
> data into HBase.
>
> Alex
>
> On Thu, May 8, 2014 at 2:31 AM, James Taylor <[email protected]> wrote:
>
>> Hi Unilocal,
>> Yes, both salting and secondary indexing rely on the Phoenix client in
>> cooperation with the server.
>>
>> Would it be possible for the C++ server to generate CSV files instead?
>> These could then be pumped into Phoenix through our CSV bulk loader
>> (which can potentially be invoked in a variety of ways). Another
>> alternative may be our Apache Pig integration. It would also be pretty
>> easy to adapt our Pig store func to a Hive SerDe; then you could use the
>> Hive ODBC driver to pump in data that's formatted in a Phoenix-compliant
>> manner.
>>
>> If none of these are options, you could pump data into a Phoenix table
>> and then transfer it (using Phoenix APIs) through UPSERT SELECT into a
>> salted table or a table with secondary indexes.
>>
>> Thanks,
>> James
>>
>> On Mon, May 5, 2014 at 2:42 PM, Localhost shell <[email protected]> wrote:
>>
>>> Hey Folks,
>>>
>>> I have a use case where one of our apps (a C++ server) will pump data
>>> into HBase. Since Phoenix doesn't provide an ODBC API, the app can't
>>> use the Phoenix JDBC API and will use the HBase Thrift API to insert
>>> the data.
>>> Note: the app inserting the data will construct row keys the same way
>>> the Phoenix JDBC driver does.
>>>
>>> Currently no data resides in HBase, and the table will be freshly
>>> created with SQL commands (using Phoenix sqlline). All the
>>> analysis/group-by queries will be triggered by a different app using
>>> the Phoenix JDBC APIs.
>>>
>>> In the above scenario, are there any Phoenix features (for example,
>>> salting or secondary indexing) that will not be available because the
>>> Phoenix JDBC driver is not used for inserting data?
>>>
>>> Can someone please share their thoughts on this?
>>>
>>> Hadoop Distro: CDH5
>>> HBase: 0.96.1
>>>
>>> --Unilocal
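If the C++ server goes the CSV route suggested above, fields containing the delimiter, quotes, or newlines need to be quoted so the bulk loader parses them correctly. Here is a minimal Java sketch of RFC 4180-style escaping (assuming the loader is configured for standard double-quote escaping; the class and method names are hypothetical, and the C++ version would mirror the same logic):

```java
public class CsvEscape {
    // Quote a field when it contains the delimiter, a quote, or a newline;
    // embedded quotes are doubled, per RFC 4180.
    static String escape(String field) {
        if (field.contains(",") || field.contains("\"") || field.contains("\n")) {
            return "\"" + field.replace("\"", "\"\"") + "\"";
        }
        return field;
    }

    // Join already-escaped fields into one comma-delimited CSV line.
    static String row(String... fields) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < fields.length; i++) {
            if (i > 0) sb.append(',');
            sb.append(escape(fields[i]));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // prints: k1,plain,"has,comma","has""quote"
        System.out.println(row("k1", "plain", "has,comma", "has\"quote"));
    }
}
```

One line per row, columns in the order the Phoenix table declares them, then hand the files to the CSV bulk loader as James describes.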
