@Ashutosh: Please don't cross-post. Please use the relevant thread.

@zzd7zzd: In that case, can you check in your app whether you are seeing
the updated value of the property you modified?
PS: With the native HBase API, I would do conf.get($property_name). I am not
sure how to do this in Phoenix.
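As a concrete illustration, the client-side `hbase-site.xml` entry would look something like this; the value of 128 is purely illustrative, not a recommendation:

```xml
<!-- hbase-site.xml on the client's classpath; 128 is an illustrative cap -->
<configuration>
  <property>
    <name>hbase.htable.threads.max</name>
    <value>128</value>
  </property>
</configuration>
```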
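For background on why the thread count explodes in the first place: the HTable pool quoted further down uses a SynchronousQueue ("direct handoff"), so any submit that finds no idle worker starts a brand-new thread, up to `hbase.htable.threads.max` (default Integer.MAX_VALUE). A self-contained sketch of that behavior using only the JDK (class and method names here are illustrative, not HBase's):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DirectHandoffDemo {

    // Submits `tasks` blocking jobs to a direct-handoff pool shaped like
    // HTable's default (core size 1, effectively unbounded maximum) and
    // reports how many worker threads the pool ended up creating.
    public static int poolSizeAfter(int tasks) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, Integer.MAX_VALUE, 60, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>());
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < tasks; i++) {
            // Every worker is blocked on the latch, so each SynchronousQueue
            // hand-off fails and the pool spawns yet another thread.
            pool.execute(() -> {
                try {
                    release.await();
                } catch (InterruptedException ignored) {
                }
            });
        }
        int size = pool.getPoolSize();
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return size;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(poolSizeAfter(50)); // one thread per in-flight task
    }
}
```

With a bounded `hbase.htable.threads.max`, the pool would stop creating new workers once the maximum is reached, which is why capping it on the client side tames the thread count.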

On Sat, Sep 19, 2015 at 8:49 AM, Ashutosh Sharma <
ashu.sharma.in...@gmail.com> wrote:

> *Problem is resolved now.*
> It was a class-file version mismatch caused by some conflicting JAR versions.
>
> Followed all these links thoroughly:
> https://phoenix.apache.org/installation.html
>
> https://phoenix.apache.org/Phoenix-in-15-minutes-or-less.html
>
>
> https://phoenix.apache.org/faq.html#I_want_to_get_started_Is_there_a_Phoenix_Hello_World
>
> Created a brand new Eclipse workspace and then successfully executed this
> one:
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.PreparedStatement;
> import java.sql.ResultSet;
> import java.sql.SQLException;
> import java.sql.Statement;
>
> // Follow this one:
> // https://phoenix.apache.org/faq.html#I_want_to_get_started_Is_there_a_Phoenix_Hello_World
> public class TestPhoenix {
>
>     public static void main(String[] args) throws SQLException {
>         Statement stmt = null;
>         ResultSet rset = null;
>         Connection con = DriverManager.getConnection("jdbc:phoenix:localhost");
>         stmt = con.createStatement();
>         // The lines below are commented out because the table already exists in the DB
>         /*
>         stmt.executeUpdate("create table test (mykey integer not null primary key, mycolumn varchar)");
>         stmt.executeUpdate("upsert into test values (1,'Hello')");
>         stmt.executeUpdate("upsert into test values (2,'World!')");
>         con.commit();
>         */
>         PreparedStatement statement = con.prepareStatement("select * from test");
>         rset = statement.executeQuery();
>         while (rset.next()) {
>             System.out.println(rset.getString("mycolumn"));
>         }
>         // Add some more rows for testing
>         stmt.executeUpdate("upsert into test values (3,'Ashu')");
>         stmt.executeUpdate("upsert into test values (4,'Sharma')");
>         stmt.executeUpdate("upsert into test values (5,'Ayush')");
>         stmt.executeUpdate("upsert into test values (6,'Shivam')");
>         con.commit();
>         // Now read it again
>         rset = statement.executeQuery();
>         while (rset.next()) {
>             System.out.println(rset.getString("mycolumn"));
>         }
>         statement.close();
>         con.close();
>     }
> }
>
>
> Working fine. Only the Phoenix client JAR is needed, nothing more than that.
> A few questions: I can see that the table I created using Phoenix is also
> created in HBase. But how do they work together internally? If an update
> happens on the HBase side, is it reflected on the Phoenix side, and vice
> versa?
>
>
> On Sat, Sep 19, 2015 at 8:23 AM, anil gupta <anilgupt...@gmail.com> wrote:
>
>> Please make sure that hbase-site.xml is in classpath of your app.
>>
>> On Sat, Sep 19, 2015 at 6:32 AM, zz d <zzd7...@gmail.com> wrote:
>>
>>> Version: phoenix-4.5.0-HBase-0.98
>>>
>>> Program Exception:
>>>
>>> ```
>>> java.lang.OutOfMemoryError: unable to create new native thread
>>>         at java.lang.Thread.start0(Native Method)
>>>         at java.lang.Thread.start(Thread.java:714)
>>>         at
>>> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:949)
>>>         at
>>> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1360)
>>>         at
>>> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:132)
>>>         at
>>> org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1625)
>>>         at
>>> org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1598)
>>>         at
>>> org.apache.phoenix.cache.ServerCacheClient.removeServerCache(ServerCacheClient.java:308)
>>>         at
>>> org.apache.phoenix.cache.ServerCacheClient.access$000(ServerCacheClient.java:82)
>>> ```
>>>
>>> I found that the program had created too many threads.
>>>
>>> I read the HBase code and found that the maximum number of threads is
>>> determined by `hbase.htable.threads.max`:
>>>
>>> ```
>>> public static ThreadPoolExecutor getDefaultExecutor(Configuration conf) {
>>>     int maxThreads = conf.getInt("hbase.htable.threads.max", Integer.MAX_VALUE);
>>>     if (maxThreads == 0) {
>>>       maxThreads = 1; // is there a better default?
>>>     }
>>>     long keepAliveTime = conf.getLong("hbase.htable.threads.keepalivetime", 60);
>>>
>>>     // Using the "direct handoff" approach, new threads will only be created
>>>     // if it is necessary and will grow unbounded. This could be bad but in HCM
>>>     // we only create as many Runnables as there are region servers. It means
>>>     // it also scales when new region servers are added.
>>>     ThreadPoolExecutor pool = new ThreadPoolExecutor(1, maxThreads,
>>>         keepAliveTime, TimeUnit.SECONDS,
>>>         new SynchronousQueue<Runnable>(),
>>>         Threads.newDaemonThreadFactory("htable"));
>>>     ((ThreadPoolExecutor) pool).allowCoreThreadTimeOut(true);
>>>     return pool;
>>> }
>>> ```
>>>
>>> This parameter can also be found in the Phoenix code, and it is documented
>>> here:
>>> https://phoenix.apache.org/secondary_indexingha.html
>>>
>>> I set the parameter in `hbase-site.xml` and restarted HBase. I also use
>>> the same `hbase-site.xml` on the client side, but the number of threads in
>>> my client does not go down.
>>>
>>> How can I control the number of threads in the client?
>>>
>>> Thanks !
>>>
>>
>>
>>
>> --
>> Thanks & Regards,
>> Anil Gupta
>>
>
>
>
> --
> With best Regards:
> Ashutosh Sharma
>



-- 
Thanks & Regards,
Anil Gupta
