Re: How many servers are needed to put Phoenix in production?

2016-12-27 Thread Cheyenne Forbes
Are there any recommended specs for the servers?

Re: slow response on large # of columns

2016-12-27 Thread Mark Heppner
If you don't need to query any of the 3600 columns, you could even just use JSON inside a VARCHAR field.

On Mon, Dec 26, 2016 at 2:25 AM, Arvind S wrote:
> Setup ..
> hbase (1.1.2.2.4) cluster on azure with 1 Region server. (8core 28 gb ram ..~16gb RS heap)
> phoenix
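A minimal sketch of that JSON-in-VARCHAR approach, using a hypothetical table name and layout; Phoenix just stores the string, and the application serializes and parses the JSON itself:

    -- Hypothetical schema: one payload column instead of 3600 mapped columns
    CREATE TABLE IF NOT EXISTS sensor_data (
        id VARCHAR NOT NULL PRIMARY KEY,
        payload VARCHAR  -- JSON blob, built and parsed client-side
    );

    -- The client serializes all the values into one JSON string
    UPSERT INTO sensor_data (id, payload)
        VALUES ('row-1', '{"col1": 1.0, "col2": 2.0}');

The trade-off, as the suggestion implies, is that the individual JSON fields are not queryable or indexable from Phoenix.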

Re: slow response on large # of columns

2016-12-27 Thread Jonathan Leech
I would try an array for that use case. From my experience, for execution time querying the same data in HBase: more rows > more columns > fewer columns. Also note that when you run a query, Phoenix creates a plan every time, and the number of columns might matter there. Also the sqlline
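A sketch of the array alternative, again with hypothetical names; Phoenix supports typed arrays, and elements are addressed with a one-based index:

    -- Hypothetical schema storing the values as a single array column
    CREATE TABLE IF NOT EXISTS sensor_array (
        id VARCHAR NOT NULL PRIMARY KEY,
        vals DOUBLE ARRAY
    );

    UPSERT INTO sensor_array (id, vals)
        VALUES ('row-1', ARRAY[1.0, 2.0, 3.0]);

    -- Array indexing is one-based in Phoenix
    SELECT vals[1] FROM sensor_array WHERE id = 'row-1';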

Re: Shared Multi-Tenant Views

2016-12-27 Thread James Taylor
Hi Jeremy, Please file a JIRA for VIEW_INDEX_ID to be an int. The code would just need to be tolerant, when reading the data, of whether the length is a two-byte short or a four-byte int. At write time, we'd always write an int. If you could put together a patch that maintains backward compatibility,

Re: slow response on large # of columns

2016-12-27 Thread Josh Elser
Maybe you could split some of the columns into separate column families, so you get some physical partitioning on disk? Whether you select one column or many, you presently have to read through each column on disk. AFAIK, there shouldn't really be an upper limit here (in terms of what
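A sketch of that column-family split, with hypothetical names; in Phoenix a column family is declared by prefixing the column name, so frequently read columns can be physically separated on disk from the rest:

    -- Hypothetical schema: hot columns in family A, rarely read ones in family B
    CREATE TABLE IF NOT EXISTS wide_table (
        id VARCHAR NOT NULL PRIMARY KEY,
        A.hot_col1 VARCHAR,
        A.hot_col2 VARCHAR,
        B.cold_col1 VARCHAR
    );

    -- A scan that touches only family A does not have to read family B's files
    SELECT A.hot_col1, A.hot_col2 FROM wide_table WHERE id = 'row-1';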

Re: How many servers are needed to put Phoenix in production?

2016-12-27 Thread Mark Heppner
I don't think anyone will be able to tell you to use a specific number of servers. A general rule to follow is to put an HBase RegionServer on each of your HDFS DataNodes.

On Mon, Dec 26, 2016 at 5:52 AM, Cheyenne Forbes <cheyenne.osanu.for...@gmail.com> wrote:
> When I say how many I mean,