[jira] [Commented] (HDFS-7129) Metrics to track usage of memory for writes

2014-10-02 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14156350#comment-14156350 ]

Hudson commented on HDFS-7129:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-7129. Metrics to track usage of memory for writes. (Contributed by Xiaoyu 
Yao) (arp: rev 5e8b6973527e5f714652641ed95e8a4509e18cfa)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/JMXGet.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java


 Metrics to track usage of memory for writes
 ---

 Key: HDFS-7129
 URL: https://issues.apache.org/jira/browse/HDFS-7129
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Xiaoyu Yao
 Fix For: 3.0.0

 Attachments: HDFS-7129.0.patch, HDFS-7129.1.patch, HDFS-7129.2.patch, 
 HDFS-7129.3.patch


 A few metrics to evaluate feature usage and suggest improvements. Thanks to 
 [~sureshms] for some of these suggestions.
 # Number of times a block in memory was read (before being ejected)
 # Average block size for data written to memory tier
 # Time the block was in memory before being ejected
 # Number of blocks written to memory
 # Number of memory writes requested but not satisfied (failed-over to disk)
 # Number of blocks evicted without ever being read from memory
 # Average delay between memory write and disk write (window where a node 
 restart could cause data loss).
 # Replicas written to disk by lazy writer
 # Bytes written to disk by lazy writer
 # Replicas deleted by application before being persisted to disk
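 As a rough illustration only (not part of the patch), counters like these could be read from the DataNode over JMX with the standard platform MBean APIs once they are registered. The bean and attribute names below are assumptions for illustration; the actual names are whatever the patch exposes.

{code:java}
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Hedged sketch: read one of the proposed RAM disk counters over JMX.
// The ObjectName and attribute name are illustrative assumptions.
public class RamDiskMetricsProbe {
  public static void main(String[] args) throws Exception {
    MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
    ObjectName dnBean = new ObjectName(
        "Hadoop:service=DataNode,name=DataNodeActivity-example-host-50010");
    Object blocksWritten = mbs.getAttribute(dnBean, "RamDiskBlocksWrite");
    System.out.println("RamDiskBlocksWrite = " + blocksWritten);
  }
}
{code}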





[jira] [Commented] (HDFS-7129) Metrics to track usage of memory for writes

2014-10-02 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14156428#comment-14156428 ]

Hudson commented on HDFS-7129:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-7129. Metrics to track usage of memory for writes. (Contributed by Xiaoyu 
Yao) (arp: rev 5e8b6973527e5f714652641ed95e8a4509e18cfa)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/JMXGet.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java




[jira] [Commented] (HDFS-7129) Metrics to track usage of memory for writes

2014-10-02 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14156544#comment-14156544 ]

Hudson commented on HDFS-7129:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1914 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1914/])
HDFS-7129. Metrics to track usage of memory for writes. (Contributed by Xiaoyu 
Yao) (arp: rev 5e8b6973527e5f714652641ed95e8a4509e18cfa)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/JMXGet.java




[jira] [Commented] (HDFS-7129) Metrics to track usage of memory for writes

2014-10-01 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14155019#comment-14155019 ]

Hudson commented on HDFS-7129:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6163 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6163/])
HDFS-7129. Metrics to track usage of memory for writes. (Contributed by Xiaoyu 
Yao) (arp: rev 5e8b6973527e5f714652641ed95e8a4509e18cfa)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/JMXGet.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java




[jira] [Commented] (HDFS-7129) Metrics to track usage of memory for writes

2014-09-30 Thread Arpit Agarwal (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14152913#comment-14152913 ]

Arpit Agarwal commented on HDFS-7129:
-

+1 for the patch. I will commit it shortly.

Two nitpicks we can clean up later:
# {{if (replicaInfo.getIsPersisted() == false)}} can just be written as {{if (!replicaInfo.getIsPersisted())}}.
# We can eliminate {{FsDatasetImpl.discardRamDiskReplica}} since it just 
forwards to {{ramDiskReplicaTracker.discardReplica}}.
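
As a rough, non-authoritative sketch of the cleaned-up call site (the surrounding context and the tracker's argument list are assumptions for illustration, not the actual FsDatasetImpl code):

{code:java}
// Nitpick 1: compare the boolean directly instead of "== false".
if (!replicaInfo.getIsPersisted()) {
  // Nitpick 2: call the tracker directly instead of going through the thin
  // FsDatasetImpl.discardRamDiskReplica() forwarding method. "bpid" and the
  // argument list here are guesses for illustration only.
  ramDiskReplicaTracker.discardReplica(bpid, replicaInfo.getBlockId(), false);
}
{code}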



[jira] [Commented] (HDFS-7129) Metrics to track usage of memory for writes

2014-09-29 Thread Arpit Agarwal (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14152017#comment-14152017 ]

Arpit Agarwal commented on HDFS-7129:
-

Hi Xiaoyu, thanks for taking up this task! Nice updates to the test cases. 

A few comments below:
# We should not increment addRamDiskBytesWrite by block length in createRbw 
since at this point we are just reserving space for the block. No data has been 
written. This metric can be incremented in finalizeReplica when the final block 
length is known.
# Similarly for addRamDiskBytesWriteFallback. This metric can be hard to track 
since we'd need to keep state until finalize that the write was originally 
requested on RAM Disk. We can skip this metric and just have 
incrRamDiskBlocksWriteFallback. 
# When you call incrRamDiskBytesLazyPersisted, you can avoid the system call to get the file length. It is available as {{replicaInfo#getNumBytes}}.
# RamDiskReplica#numReads could be renamed to numAccesses.
# ramDiskBlocksEvictionWindow and ramDiskBlocksLazyPersistWindow should be 
MutableQuantile.

Looks good otherwise.
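
To make the quantile suggestion concrete, here is a minimal, non-authoritative sketch using the metrics2 helpers. The class and metric names are assumptions for illustration, not the patch itself:

{code:java}
import org.apache.hadoop.metrics2.lib.MetricsRegistry;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;
import org.apache.hadoop.metrics2.lib.MutableQuantiles;

// Hedged sketch of suggestions #1 and #5 above: count RAM disk bytes only
// once the final block length is known, and expose the lazy-persist window
// as a quantile metric rather than a plain counter.
class RamDiskMetricsSketch {
  private final MetricsRegistry registry = new MetricsRegistry("datanode");

  private final MutableCounterLong ramDiskBytesWrite =
      registry.newCounter("RamDiskBytesWrite", "Bytes written to RAM disk", 0L);

  private final MutableQuantiles lazyPersistWindowMs =
      registry.newQuantiles("RamDiskBlocksLazyPersistWindowMs",
          "Delay between memory write and disk persist", "ops", "latencyMs", 60);

  // Called from finalizeReplica(), where the final length is known,
  // rather than from createRbw(), which only reserves space.
  void onReplicaFinalizedOnRamDisk(long numBytes) {
    ramDiskBytesWrite.incr(numBytes);
  }

  void onLazyPersistComplete(long windowMs) {
    lazyPersistWindowMs.add(windowMs);
  }
}
{code}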



[jira] [Commented] (HDFS-7129) Metrics to track usage of memory for writes

2014-09-29 Thread Chris Nauroth (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14152215#comment-14152215 ]

Chris Nauroth commented on HDFS-7129:
-

Nice work, Xiaoyu.  Just a couple more comments in addition to the great 
feedback Arpit already gave:
# {{RamDiskReplicaLruTracker}}: I see the patch adds an import of {{Time}}, but 
it's not used.  Was the intention to use {{Time#monotonicNow}} throughout this 
class?  The class currently uses {{System#currentTimeMillis}}, which would put 
the eviction logic at more risk of an imprecise timer or resetting the system 
clock.
# {{TestLazyPersistFiles#startUpCluster}}: I recommend calling {{fail}} if 
{{initJMX}} throws an exception instead of logging and continuing.  If JMX 
initialization fails, then it's highly likely that the tests will fail later on 
the metrics assertions.  Failing earlier would help a dev find the root cause 
faster.
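
A minimal sketch of both points, assuming a hypothetical test-helper structure (initJMX() and the class layout are illustrative, not the actual TestLazyPersistFiles code):

{code:java}
import static org.junit.Assert.fail;

import org.apache.hadoop.util.Time;

// Hedged sketch of comments #1 and #2 above.
public class LazyPersistMetricsTestSketch {
  private long replicaCreationTime;

  // #1: use a monotonic clock so a system clock reset cannot skew
  // eviction-window bookkeeping.
  void recordReplicaCreated() {
    replicaCreationTime = Time.monotonicNow();
  }

  // #2: fail fast if JMX setup throws, instead of logging and continuing,
  // so the root cause is visible before the metrics assertions run.
  void startUpCluster() {
    try {
      initJMX();
    } catch (Exception e) {
      fail("JMX initialization failed; metrics assertions cannot run: " + e);
    }
  }

  private void initJMX() throws Exception {
    // placeholder for the test's actual JMX setup
  }
}
{code}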




[jira] [Commented] (HDFS-7129) Metrics to track usage of memory for writes

2014-09-29 Thread Xiaoyu Yao (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14152230#comment-14152230 ]

Xiaoyu Yao commented on HDFS-7129:
--

Thanks [~arpitagarwal] for the review. 

We should not increment addRamDiskBytesWrite by block length in createRbw since 
at this point we are just reserving space for the block. No data has been 
written. This metric can be incremented in finalizeReplica when the final block 
length is known.
Similarly for addRamDiskBytesWriteFallback. This metric can be hard to track 
since we'd need to keep state until finalize that the write was originally 
requested on RAM Disk. We can skip this metric and just have 
incrRamDiskBlocksWriteFallback.
[Xiaoyu]: Agree and will update the addRamDiskBytesWrite and remove 
ramDiskBytesWriteFallback in the next patch.

When you call incrRamDiskBytesLazyPersisted, you can avoid the system call to 
get the file length. It is available as replicaInfo#getNumBytes
[Xiaoyu]: Agree and will fix in the next patch.

RamDiskReplica#numReads could be renamed to numAccesses.
[Xiaoyu]: I chose to keep a dedicated read access counter so that, if we support re-write/append on the memory tier in the future, we only need to add a separate write counter (the write count is always 1 for now). This can also be useful for implementing other frequency-based cache replacement algorithms.

ramDiskBlocksEvictionWindow and ramDiskBlocksLazyPersistWindow should be 
MutableQuantile.
[Xiaoyu]: Agree and will fix in the next patch.
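
To illustrate the counter naming point above (a rough sketch only; RamDiskReplicaSketch is not the actual RamDiskReplica class):

{code:java}
// Hedged sketch: keeping a dedicated read counter leaves room for a separate
// write counter later (e.g. if append/re-write on the memory tier is added),
// and either counter could feed a frequency-based eviction policy.
class RamDiskReplicaSketch {
  private long numReads;                      // bumped on every read of the in-memory replica
  private static final long NUM_WRITES = 1;   // always 1 until re-write/append exists

  void onRead() {
    numReads++;
  }

  long accessCount() {
    return numReads + NUM_WRITES;
  }
}
{code}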



[jira] [Commented] (HDFS-7129) Metrics to track usage of memory for writes

2014-09-29 Thread Xiaoyu Yao (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14152239#comment-14152239 ]

Xiaoyu Yao commented on HDFS-7129:
--

Thanks [~cnauroth] for the review. 

Good catch on 1! Yes, the intention is to use {{Time#monotonicNow}} throughout the class.

Agree on 2 and will update the patch to fail the test if initJMX() fails. 



[jira] [Commented] (HDFS-7129) Metrics to track usage of memory for writes

2014-09-29 Thread Chris Nauroth (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14152795#comment-14152795 ]

Chris Nauroth commented on HDFS-7129:
-

Thanks for incorporating the feedback, Xiaoyu.  +1 from me, pending response 
from Arpit on his feedback too.
