[jira] [Commented] (HBASE-16804) JavaHBaseContext.streamBulkGet is void but must be JavaDStream

2016-10-10 Thread Igor Yurinok (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15564123#comment-15564123
 ] 

Igor Yurinok commented on HBASE-16804:
--

Added a patch.

> JavaHBaseContext.streamBulkGet is void but must be JavaDStream 
> ---
>
> Key: HBASE-16804
> URL: https://issues.apache.org/jira/browse/HBASE-16804
> Project: HBase
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.0.0
>Reporter: Igor Yurinok
>  Labels: beginner, spark
> Attachments: HBASE-16804.patch
>
>
> This is the current implementation in JavaHBaseContext.scala:
> {code}
> def streamBulkGet[T, U](tableName: TableName,
>   batchSize: Integer,
>   javaDStream: JavaDStream[T],
>   makeGet: Function[T, Get],
>   convertResult: Function[Result, U])
> {code}
> It should be:
> {code}
> def streamBulkGet[T, U](tableName: TableName,
>   batchSize: Integer,
>   javaDStream: JavaDStream[T],
>   makeGet: Function[T, Get],
>   convertResult: Function[Result, U]): JavaDStream[U]
> {code}
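
The return type matters because callers need the transformed stream back to register downstream operations on it. As a minimal stand-alone sketch (plain Java, no Spark; the `Pipe` class and its `map` method here are hypothetical stand-ins for JavaDStream and streamBulkGet, not the real API), a transform that returns the result can be chained, while a void one cannot:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Hypothetical stand-in for JavaDStream: a transform must hand back the
// resulting stream, otherwise the caller has nothing to operate on next.
class Pipe<T> {
    final List<T> items;
    Pipe(List<T> items) { this.items = items; }

    // Returning Pipe<U> (the fix) instead of void lets callers chain calls.
    <U> Pipe<U> map(Function<T, U> f) {
        List<U> out = new ArrayList<>();
        for (T t : items) out.add(f.apply(t));
        return new Pipe<>(out);
    }
}

public class StreamReturnDemo {
    public static void main(String[] args) {
        Pipe<Integer> ids = new Pipe<>(List.of(1, 2, 3));
        // With a void map(...) the next line would not compile; with the
        // corrected signature the transformed results flow onward.
        Pipe<String> rows = ids.map(i -> "row-" + i);
        System.out.println(rows.items); // prints [row-1, row-2, row-3]
    }
}
```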



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16804) JavaHBaseContext.streamBulkGet is void but must be JavaDStream

2016-10-10 Thread Igor Yurinok (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Yurinok updated HBASE-16804:
-
Attachment: HBASE-16804.patch



[jira] [Updated] (HBASE-16804) JavaHBaseContext.streamBulkGet is void but must be JavaDStream

2016-10-10 Thread Igor Yurinok (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Yurinok updated HBASE-16804:
-
Labels: beginner spark  (was: )



[jira] [Created] (HBASE-16804) JavaHBaseContext.streamBulkGet is void but must be JavaDStream

2016-10-10 Thread Igor Yurinok (JIRA)
Igor Yurinok created HBASE-16804:


 Summary: JavaHBaseContext.streamBulkGet is void but must be 
JavaDStream 
 Key: HBASE-16804
 URL: https://issues.apache.org/jira/browse/HBASE-16804
 Project: HBase
  Issue Type: Bug
  Components: spark
Affects Versions: 2.0.0
Reporter: Igor Yurinok




[jira] [Commented] (HBASE-6956) Do not return back to HTablePool closed connections

2012-11-30 Thread Igor Yurinok (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507353#comment-13507353
 ] 

Igor Yurinok commented on HBASE-6956:
-

We implemented our own HTableInterfaceFactory and an HTable subclass that can check whether the connection is closed:
{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTableInterfaceFactory;

public class HumbleHTableFactory implements HTableInterfaceFactory {

    @Override
    public HTableInterface createHTableInterface(Configuration config, byte[] tableName) {
        try {
            return new HumbleHTable(config, tableName);
        } catch (IOException ioe) {
            throw new RuntimeException(ioe);
        }
    }

    @Override
    public void releaseHTableInterface(HTableInterface table) {
        try {
            table.close();
        } catch (IOException ioe) {
            throw new RuntimeException(ioe);
        }
    }
}
{code}
{code}
import java.io.IOException;
import java.lang.reflect.Field;
import java.util.concurrent.ExecutorService;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.HTable;

public class HumbleHTable extends HTable {

    private static final Log log = LogFactory.getLog(HumbleHTable.class);

    public HumbleHTable(String tableName) throws IOException {
        super(tableName);
    }

    public HumbleHTable(byte[] tableName) throws IOException {
        super(tableName);
    }

    public HumbleHTable(Configuration conf, String tableName) throws IOException {
        super(conf, tableName);
    }

    public HumbleHTable(Configuration conf, byte[] tableName) throws IOException {
        super(conf, tableName);
    }

    /**
     * Harmless clean-up - the HConnection isn't touched. Only the executor pool is shut down.
     *
     * @throws IOException if flushing pending commits fails
     */
    @Override
    public void close() throws IOException {
        if (isClosed()) {
            return;
        }
        flushCommits();
        ExecutorService pool = getPool();
        if (pool != null) {
            pool.shutdown();
        }
        setClosed();
    }

    private boolean isClosed() {
        try {
            return getSuperField("closed").getBoolean(this);
        } catch (Exception e) {
            log.error(e);
            return false;
        }
    }

    private void setClosed() {
        try {
            getSuperField("closed").setBoolean(this, true);
        } catch (Exception e) {
            log.error(e);
        }
    }

    private ExecutorService getPool() {
        try {
            return (ExecutorService) getSuperField("pool").get(this);
        } catch (Exception e) {
            log.error(e);
            return null;
        }
    }

    /**
     * Loads fields from the parent class via reflection.
     */
    private Field getSuperField(String fieldName) throws NoSuchFieldException {
        Field field = HTable.class.getDeclaredField(fieldName);
        field.setAccessible(true);
        return field;
    }
}
{code}
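
The reflection trick used in getSuperField (reaching a private field declared on a superclass) is generic; here is a self-contained sketch of the same pattern, with a made-up Base/Derived pair standing in for HTable/HumbleHTable:

```java
import java.lang.reflect.Field;

// Minimal illustration of the getSuperField(...) pattern: read and write
// a private field declared on the superclass via reflection.
class Base {
    private boolean closed = false;
}

class Derived extends Base {
    private Field superField(String name) throws NoSuchFieldException {
        Field f = Base.class.getDeclaredField(name);
        f.setAccessible(true); // bypass the private access check
        return f;
    }

    boolean isClosed() {
        try {
            return superField("closed").getBoolean(this);
        } catch (Exception e) {
            return false;
        }
    }

    void setClosed() {
        try {
            superField("closed").setBoolean(this, true);
        } catch (Exception e) {
            // the original code logs and continues; we just continue
        }
    }
}
```

Reflection ties the subclass to HTable's private layout, so this is brittle across HBase versions; it was only needed because the `closed` flag was not exposed through HTableInterface.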

> Do not return back to HTablePool closed connections
> ---
>
> Key: HBASE-6956
> URL: https://issues.apache.org/jira/browse/HBASE-6956
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.90.6
>Reporter: Igor Yurinok
>
> Sometimes we see a lot of exceptions about closed connections:
> {code}
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@553fd068 closed
> org.apache.hadoop.hbase.client.ClosedConnectionException: org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@553fd068 closed
> {code}
> After investigation we concluded that this occurs because closed connections are returned to the HTablePool.
> In our opinion the best solution is to check in HTablePool.putTable whether the table is closed and, if so, release the HTableInterface instead of adding it back to the queue.
> But unfortunately, right now there is no access to the HTable#closed field through HTableInterface.
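
The proposed putTable check can be sketched without the HBase classes; `PooledTable` and `Pool` below are hypothetical stand-ins for HTableInterface and HTablePool, showing a putTable that releases closed tables instead of re-queuing them:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical stand-ins for HTableInterface / HTablePool, sketching the
// proposed fix: putTable must not hand a closed table to the next caller.
interface PooledTable {
    boolean isClosed();
    void release();
}

class Pool {
    private final Deque<PooledTable> queue = new ArrayDeque<>();

    void putTable(PooledTable t) {
        if (t.isClosed()) {
            t.release(); // drop it instead of re-queuing it
            return;
        }
        queue.addLast(t);
    }

    int size() { return queue.size(); }
}
```

The catch, as the description notes, is that the real HTablePool sees only HTableInterface, which exposes no equivalent of `isClosed()`.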

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-6956) Do not return back to HTablePool closed connections

2012-10-05 Thread Igor Yurinok (JIRA)
Igor Yurinok created HBASE-6956:
---

 Summary: Do not return back to HTablePool closed connections
 Key: HBASE-6956
 URL: https://issues.apache.org/jira/browse/HBASE-6956
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.90.6
Reporter: Igor Yurinok

