[ https://issues.apache.org/jira/browse/HBASE-20188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16425019#comment-16425019 ]

stack edited comment on HBASE-20188 at 4/4/18 5:14 AM:
-------------------------------------------------------

[~ram_krish] Setting this on the client-side

<property>
  <name>dfs.domain.socket.path</name>
  <value>/home/stack/sockets/stack_dn_socket</value>
  <description>
    This configuration parameter turns on short-circuit local reads.
  </description>
</property>

See the paragraph above and the flamegraphs for the difference between hbase1 
and hbase2 without the above set.

dfs.client.read.shortcircuit.skip.checksum makes sense. Let me try it here and 
see if it helps. Let me add it to the doc over on HBASE-20337.

Do you recall what prompted your upping of the 
'dfs.client.read.shortcircuit.streams.cache.size' and 
'dfs.client.socketcache.capacity' values? Let's get that into HBASE-20337 too.
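For reference, here is a client-side hdfs-site.xml sketch pulling the four 
properties together. The values are illustrative only: the socket path is 
deployment-specific, and the rest are the stock HDFS defaults, not whatever 
ram's study settled on.

```xml
<!-- Illustrative client-side hdfs-site.xml fragment. Socket path is
     deployment-specific; the other values shown are the HDFS defaults. -->
<configuration>
  <property>
    <name>dfs.client.read.shortcircuit</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.domain.socket.path</name>
    <value>/home/stack/sockets/stack_dn_socket</value>
  </property>
  <property>
    <!-- Default is false; flipping to true is what is being tried above. -->
    <name>dfs.client.read.shortcircuit.skip.checksum</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.client.read.shortcircuit.streams.cache.size</name>
    <value>256</value>
  </property>
  <property>
    <name>dfs.client.socketcache.capacity</name>
    <value>16</value>
  </property>
</configuration>
```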

You have "... We have done some detailed study on the effect of short circuit 
reads and have our analysis on it." Is it available anywhere boss?


> [TESTING] Performance
> ---------------------
>
>                 Key: HBASE-20188
>                 URL: https://issues.apache.org/jira/browse/HBASE-20188
>             Project: HBase
>          Issue Type: Umbrella
>          Components: Performance
>            Reporter: stack
>            Assignee: stack
>            Priority: Blocker
>             Fix For: 2.0.0
>
>         Attachments: CAM-CONFIG-V01.patch, HBASE-20188.sh, HBase 2.0 
> performance evaluation - Basic vs None_ system settings.pdf, 
> ITBLL2.5B_1.2.7vs2.0.0_cpu.png, ITBLL2.5B_1.2.7vs2.0.0_gctime.png, 
> ITBLL2.5B_1.2.7vs2.0.0_iops.png, ITBLL2.5B_1.2.7vs2.0.0_load.png, 
> ITBLL2.5B_1.2.7vs2.0.0_memheap.png, ITBLL2.5B_1.2.7vs2.0.0_memstore.png, 
> ITBLL2.5B_1.2.7vs2.0.0_ops.png, 
> ITBLL2.5B_1.2.7vs2.0.0_ops_NOT_summing_regions.png, YCSB_CPU.png, 
> YCSB_GC_TIME.png, YCSB_IN_MEMORY_COMPACTION=NONE.ops.png, YCSB_MEMSTORE.png, 
> YCSB_OPs.png, YCSB_in-memory-compaction=NONE.ops.png, YCSB_load.png, 
> flamegraph-1072.1.svg, flamegraph-1072.2.svg, 
> lock.127.workloadc.20180402T200918Z.svg, 
> lock.2.memsize2.c.20180403T160257Z.svg, tree.txt
>
>
> How does 2.0.0 compare to old versions? Is it faster, slower? There is rumor 
> that it is much slower, that the problem is the asyncwal writing. Does 
> in-memory compaction slow us down or speed us up? What happens when you 
> enable offheaping?
> Keep notes here in this umbrella issue. Need to be able to say something 
> about perf when 2.0.0 ships.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)