[jira] [Updated] (PHOENIX-2780) Escape double quotation in dynamic field names

2016-03-20 Thread Powpow Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Powpow Shen updated PHOENIX-2780:
-
Description: 
UPSERTing a row with \' (an escaped single quote) in a value is allowed, but a 
row with \" (an escaped double quote) in a dynamic field name is not. For 
example:

{quote}
upsert into "test"("id", "static", "dynamic" varchar) values (0, 's', 'd\'');
{quote}

is OK

{quote}
upsert into "test"("id", "static", "dynamic\"" varchar) values (0, 's', 'd');
{quote}

is NOT allowed.
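
If Phoenix follows the SQL standard for quoted identifiers, doubling the quote 
character may serve as a workaround; an untested sketch, not confirmed against 
Phoenix's grammar:

{quote}
upsert into "test"("id", "static", "dynamic""" varchar) values (0, 's', 'd');
{quote}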

  was:
UPSERTing a row with \' (an escaped single quote) in a value is allowed, but a 
row with \" (an escaped double quote) in a dynamic field name is not. For 
example:

{quote}
upsert into "test"("id", "static", "dynamic" varchar) values (0, 's', 'd\'');
{quote}

is OK

{quote}
upsert into "test"("id", "static", "dynamic\"" varchar) values (0, 's', 'd');
{quote}

is NOT allowed.


> Escape double quotation in dynamic field names
> --
>
> Key: PHOENIX-2780
> URL: https://issues.apache.org/jira/browse/PHOENIX-2780
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Powpow Shen
>
> UPSERTing a row with \' (an escaped single quote) in a value is allowed, but 
> a row with \" (an escaped double quote) in a dynamic field name is not. For 
> example:
> {quote}
> upsert into "test"("id", "static", "dynamic" varchar) values (0, 's', 'd\'');
> {quote}
> is OK
> {quote}
> upsert into "test"("id", "static", "dynamic\"" varchar) values (0, 's', 'd');
> {quote}
> is NOT allowed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2780) Escape double quotation in dynamic field names

2016-03-20 Thread Powpow Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Powpow Shen updated PHOENIX-2780:
-
Description: 
UPSERTing a row with \' (an escaped single quote) in a value is allowed, but a 
row with \" (an escaped double quote) in a dynamic field name is not. For 
example:

{quote}
upsert into "test"("id", "static", "dynamic" varchar) values (0, 's', 'd\'');
{quote}

is OK

{quote}
upsert into "test"("id", "static", "dynamic\"" varchar) values (0, 's', 'd');
{quote}

is NOT allowed.

  was:
UPSERTing a row with \' (an escaped single quote) in a value is allowed, but a 
row with \" (an escaped double quote) in a dynamic field name is not. For 
example:
{quote}
upsert into "test"("id", "static", "dynamic" varchar) values (0, 's', 'd\'');
{quote}
is OK
{quote}
upsert into "test"("id", "static", "dynamic\"" varchar) values (0, 's', 'd');
{quote}
is NOT allowed.


> Escape double quotation in dynamic field names
> --
>
> Key: PHOENIX-2780
> URL: https://issues.apache.org/jira/browse/PHOENIX-2780
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Powpow Shen
>
> UPSERTing a row with \' (an escaped single quote) in a value is allowed, but 
> a row with \" (an escaped double quote) in a dynamic field name is not. For 
> example:
> {quote}
> upsert into "test"("id", "static", "dynamic" varchar) values (0, 's', 'd\'');
> {quote}
> is OK
> {quote}
> upsert into "test"("id", "static", "dynamic\"" varchar) values (0, 's', 'd');
> {quote}
> is NOT allowed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Implement Custom Aggregate Functions in Phoenix

2016-03-20 Thread Swapna Swapna
Thank you James for providing the URL. I will try it today as per the
directions.
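
For reference, the registration step described on that page comes down to a
single CREATE FUNCTION statement; a sketch with placeholder function, class,
and jar names:

CREATE FUNCTION my_reverse(varchar) RETURNS varchar
AS 'com.mypackage.MyReverseFunction' USING JAR 'hdfs:/hbase/lib/myjar.jar';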

On Thu, Mar 17, 2016 at 6:52 PM, James Taylor 
wrote:

> No need to register your custom UDFs. Did you see these directions:
> https://phoenix.apache.org/udf.html#How_to_write_custom_UDF?
>
> Have you tried it yet?
>
> On Thu, Mar 17, 2016 at 6:49 PM, Swapna Swapna 
> wrote:
>
>> Yes, we do have support for UPPER and LOWER. I just provided them as an
>> example of a UDF.
>>
>> For custom UDFs, I understand that we can go ahead and create a custom
>> UDF jar.
>>
>> But how do we register that function?
>>
>> As per the blog, I found the following lines:
>>
>> *Finally, we'll need to register our new function. For this, you'll need
>> to edit the ExpressionType enum and include your new built-in function.
>> There's room for improvement here to allow registration of user defined
>> functions outside of the phoenix jar. However, you'd need to be able to
>> ensure your class is available on the HBase server class path since this
>> will be executed on the server side at runtime.*
>>
>> Does that mean that, to register my custom function, I should edit the
>> *ExpressionType* enum that exists in Phoenix and rebuild the *phoenix
>> jar?*
>>
>>
>>
>>
>> On Thu, Mar 17, 2016 at 6:17 PM, James Taylor 
>> wrote:
>>
>>> No, custom UDFs can be added dynamically as described here:
>>> https://phoenix.apache.org/udf.html. No need to re-build Phoenix. It's
>>> just custom aggregates that would require rebuilding.
>>>
>>> FYI, we have support for UPPER and LOWER already.
>>>
>>> On Thu, Mar 17, 2016 at 6:09 PM, Swapna Swapna 
>>> wrote:
>>>
 Thank you James for the swift response.

 Does the process (adding to phoenix-core and rebuilding the jar) remain
 the same for custom UDFs as well (as it does for custom aggregate
 functions)?

 ex: we have UDFs like UPPER, LOWER, etc.

 On Thu, Mar 17, 2016 at 5:53 PM, James Taylor 
 wrote:

> Hi Swapna,
> We don't support custom aggregate functions, only scalar functions
> (see PHOENIX-2069). For a custom aggregate function, you'd need to add it
> to phoenix-core and rebuild the jar. We're open to adding them to the code
> base if they're general enough. That's how FIRST_VALUE, LAST_VALUE, and
> NTH_VALUE made it in.
> Thanks,
> James
>
> On Thu, Mar 17, 2016 at 12:11 PM, Swapna Swapna <
> talktoswa...@gmail.com> wrote:
>
>> Hi,
>>
>> I found this in Phoenix UDF documentation:
>>
>>- After compiling your code to a jar, you need to deploy the jar
>>into the HDFS. It would be better to add the jar to HDFS folder 
>> configured
>>for hbase.dynamic.jars.dir.
>>
>>
>> My question is: can that be any user-specific UDF jar that needs to be
>> copied to HDFS, or do we need to register the function, update the custom
>> UDF classes inside phoenix-core.jar, and rebuild 'phoenix-core.jar'?
>>
>> Regards
>> Swapna
>>
>>
>>
>>
>> On Fri, Jan 29, 2016 at 6:31 PM, James Taylor wrote:
>>
>>> Hi Swapna,
>>> We currently don't support custom aggregate UDF, and it looks like
>>> you found the JIRA here: PHOENIX-2069. It would be a natural extension 
>>> of
>>> UDFs. Would be great to capture your use case and requirements on the 
>>> JIRA
>>> to make sure the functionality will meet your needs.
>>> Thanks,
>>> James
>>>
>>> On Fri, Jan 29, 2016 at 1:47 PM, Swapna Swapna <
>>> talktoswa...@gmail.com> wrote:
>>>
 Hi,

 I would like to know the approach to implement and register custom
 aggregate functions in Phoenix, like the built-in aggregate functions
 SUM, COUNT, etc.

 Please help.

 Thanks
 Swapna

>>>
>>>
>>
>

>>>
>>
>


[jira] [Commented] (PHOENIX-2062) Support COUNT DISTINCT with multiple arguments

2016-03-20 Thread Pranavan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15199843#comment-15199843
 ] 

Pranavan commented on PHOENIX-2062:
---

Hi!

I am Pranavan from the University of Moratuwa, Sri Lanka. I am interested in 
working on this project. Can someone give further input on this issue?

> Support COUNT DISTINCT with multiple arguments
> --
>
> Key: PHOENIX-2062
> URL: https://issues.apache.org/jira/browse/PHOENIX-2062
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>  Labels: gsoc2016
>
> I have a situation where I want to count the distinct combination of a couple 
> of columns.
> When I try the following:-
> select count(distinct a.col1, b.col2)
> from tab1 a
> inner join tab2 b on b.joincol = a.joincol
> where a.col3 = 'some condition'
> and b.col4 = 'some other condition';
> I get the following error:-
> Error: ERROR 605 (42P00): Syntax error. Unknown function: "DISTINCT_COUNT". 
> (state=42P00,code=605)
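
Until multi-argument COUNT DISTINCT is supported, one workaround may be to
count a concatenation of the columns; a sketch, assuming non-null values and a
separator character that never occurs in the data:
{code}
select count(distinct a.col1 || '|' || b.col2)
from tab1 a
inner join tab2 b on b.joincol = a.joincol
where a.col3 = 'some condition'
and b.col4 = 'some other condition';
{code}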



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-2405) Improve stability of server side sort for ORDER BY

2016-03-20 Thread Wang, Gang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15200435#comment-15200435
 ] 

Wang, Gang edited comment on PHOENIX-2405 at 3/17/16 10:10 PM:
---

Thanks [~jamestaylor]. Yes, definitely: Apache Mnemonic provides a mechanism 
for client code to utilize different kinds of devices, e.g. off-heap memory, 
NVMe, or SSD, as additional memory space; note that performance varies with the 
underlying device. Furthermore, the memory allocator can be customized as a 
service for your specific application logic.

We can use the following code snippets to create a memory pool along with a 
general-purpose allocator service.
{code:title=Main.java|borderStyle=solid}
new SysMemAllocator(1024 * 1024 * 20 /*capacity*/, true); /* on off-heap */
new BigDataMemAllocator(Utils.getVolatileMemoryAllocatorService("vmem"),
    1024 * 1024 * 20 /*capacity*/, "." /*uri*/, true); /* on a volatile storage device */
new BigDataMemAllocator(Utils.getVolatileMemoryAllocatorService("pmalloc"),
    1024 * 1024 * 20, "./example.dat", true); /* on a non-volatile storage device */
{code}
We can then use createChunk(<...>) or createBuffer(<...>) to allocate memory 
resources; those external memory resources can be reclaimed automatically by 
JVM GC or manually by your code.
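
A hypothetical follow-up sketch (the holder type and exact signatures are
assumptions inferred from the names above, not verified against the Mnemonic
API):
{code}
BigDataMemAllocator act = new BigDataMemAllocator(
    Utils.getVolatileMemoryAllocatorService("vmem"), 1024 * 1024 * 20, ".", true);
MemBufferHolder<?> buf = act.createBuffer(1024); // allocate a 1 KB buffer
buf.get().putLong(0, 42L);  // use the backing ByteBuffer directly
buf.destroy();              // reclaim manually instead of waiting for GC
{code}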

The above applies to applications using the volatile block memory mode. If 
the sorting operation introduces some huge object graphs, we can use the 
volatile object mode of Mnemonic; there are another two corresponding 
non-volatile modes, but I think those might not be helpful for this case.

Please refer to the example & testcase code of Apache Mnemonic for details. 
Thanks.


was (Author: qichfan):
Thanks [~jamestaylor]. Yes, definitely: Apache Mnemonic provides a mechanism 
for client code to utilize different kinds of devices, e.g. off-heap memory, 
NVMe, or SSD, as additional memory space; note that performance varies with the 
underlying device. Furthermore, the memory allocator can be customized as a 
service for your specific application logic.

We can use the following code snippets to create a memory pool along with a 
general-purpose allocator service.
{code:title=Main.java|borderStyle=solid}
new SysMemAllocator(1024 * 1024 * 20 /*capacity*/, true); /* on off-heap */
new BigDataMemAllocator(Utils.getVolatileMemoryAllocatorService("vmem"),
    1024 * 1024 * 20 /*capacity*/, "." /*uri*/, true); /* on a volatile storage device */
new BigDataMemAllocator(Utils.getVolatileMemoryAllocatorService("pmalloc"),
    1024 * 1024 * 20, "./example.dat", true); /* on a non-volatile storage device */
{code}
We can then use createChunk(<...>) or createBuffer(<...>) to allocate memory 
resources; those external memory resources can be reclaimed automatically by 
JVM GC or manually by your code. 
Please refer to the example & testcase code of Apache Mnemonic for details. 
Thanks.

> Improve stability of server side sort for ORDER BY
> --
>
> Key: PHOENIX-2405
> URL: https://issues.apache.org/jira/browse/PHOENIX-2405
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Maryann Xue
>  Labels: gsoc2016
> Fix For: 4.8.0
>
>
> We currently use memory mapped files to buffer data as it's being sorted in 
> an ORDER BY (see MappedByteBufferQueue). The following types of exceptions 
> have been seen to occur:
> {code}
> Caused by: java.lang.OutOfMemoryError: Map failed
> at sun.nio.ch.FileChannelImpl.map0(Native Method)
> at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:904)
> {code}
> [~apurtell] has read that memory mapped files are not cleaned up after very 
> well in Java:
> {quote}
> "Map failed" means the JVM ran out of virtual address space. If you search 
> around stack overflow for suggestions on what to do when your app (in this 
> case Phoenix) encounters this issue when using mapped buffers, the answers 
> tend toward manually cleaning up the mapped buffers or explicitly triggering 
> a full GC. See 
> http://stackoverflow.com/questions/8553158/prevent-outofmemory-when-using-java-nio-mappedbytebuffer
>  for example. There are apparently long standing JVM/JRE problems with 
> reclamation of mapped buffers. I think we may want to explore in Phoenix a 
> different way to achieve what the current code is doing.
> {quote}
> Instead of using memory mapped files, we could use heap memory, or perhaps 
> there are other mechanisms too.
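
For reference, the manual cleanup workaround discussed in the linked Stack
Overflow thread looks roughly like the following (a sketch relying on
JVM-internal APIs, applicable to Java 8 and earlier only; not existing Phoenix
code):
{code}
import java.lang.reflect.Method;
import java.nio.MappedByteBuffer;

final class MappedBufferUtil {
    // Reflectively invoke the buffer's internal cleaner to unmap it now,
    // instead of waiting for GC to release the virtual address space.
    static void unmap(MappedByteBuffer buffer) throws Exception {
        Method cleanerMethod = buffer.getClass().getMethod("cleaner");
        cleanerMethod.setAccessible(true);
        Object cleaner = cleanerMethod.invoke(buffer);
        Method cleanMethod = cleaner.getClass().getMethod("clean");
        cleanMethod.setAccessible(true);
        cleanMethod.invoke(cleaner);
    }
}
{code}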



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2783) Creating secondary index with duplicated columns makes the catalog corrupted

2016-03-20 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15201304#comment-15201304
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-2783:
--

[~sergey.soldatov] it would be better to add the logic before creating the 
index id in the sequence table, just after collecting the columns in 
MetaDataClient#createIndex.
{noformat}
// Don't re-allocate indexId on 
ConcurrentTableMutationException,
// as there's no need to burn another sequence value.
if (allocateIndexId && indexId == null) {
{noformat}
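
A minimal sketch of such a pre-check (names are illustrative, not the actual
MetaDataClient fields):
{code}
import java.sql.SQLException;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

final class DuplicateColumnCheck {
    // Fail fast on duplicate column names before touching SYSTEM.CATALOG
    // or burning a value from the index id sequence.
    static void checkNoDuplicates(List<String> columnNames) throws SQLException {
        Set<String> seen = new HashSet<>();
        for (String name : columnNames) {
            if (!seen.add(name)) {
                throw new SQLException("Duplicate column in index definition: " + name);
            }
        }
    }
}
{code}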

> Creating secondary index with duplicated columns makes the catalog corrupted
> 
>
> Key: PHOENIX-2783
> URL: https://issues.apache.org/jira/browse/PHOENIX-2783
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-2783-1.patch
>
>
> Simple example
> {noformat}
> create table x (t1 varchar primary key, t2 varchar, t3 varchar);
> create index idx on x (t2) include (t1,t3,t3);
> {noformat}
> causes an exception that a duplicated column was detected, but the client 
> updates the catalog before throwing it, making the table unusable. Every 
> following attempt to use table x causes an ArrayIndexOutOfBounds exception. 
> This problem was discussed on the user list recently. 
> The cause of the problem is that the check for duplicated columns happens in 
> PTableImpl after MetaDataClient completes the server createTable. 
> The simple way to fix this is to add a similar check in MetaDataClient before 
> createTable is called. 
> Possibly someone can suggest a more elegant way to fix it?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2405) Improve stability of server side sort for ORDER BY

2016-03-20 Thread Yanping Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15200469#comment-15200469
 ] 

Yanping Wang commented on PHOENIX-2405:
---

As Gary commented, we can use the above code to allocate memory as needed, but 
we need to add code to Phoenix to make it aware that there is a Mnemonic option 
for memory allocation.

> Improve stability of server side sort for ORDER BY
> --
>
> Key: PHOENIX-2405
> URL: https://issues.apache.org/jira/browse/PHOENIX-2405
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Maryann Xue
>  Labels: gsoc2016
> Fix For: 4.8.0
>
>
> We currently use memory mapped files to buffer data as it's being sorted in 
> an ORDER BY (see MappedByteBufferQueue). The following types of exceptions 
> have been seen to occur:
> {code}
> Caused by: java.lang.OutOfMemoryError: Map failed
> at sun.nio.ch.FileChannelImpl.map0(Native Method)
> at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:904)
> {code}
> [~apurtell] has read that memory mapped files are not cleaned up after very 
> well in Java:
> {quote}
> "Map failed" means the JVM ran out of virtual address space. If you search 
> around stack overflow for suggestions on what to do when your app (in this 
> case Phoenix) encounters this issue when using mapped buffers, the answers 
> tend toward manually cleaning up the mapped buffers or explicitly triggering 
> a full GC. See 
> http://stackoverflow.com/questions/8553158/prevent-outofmemory-when-using-java-nio-mappedbytebuffer
>  for example. There are apparently long standing JVM/JRE problems with 
> reclamation of mapped buffers. I think we may want to explore in Phoenix a 
> different way to achieve what the current code is doing.
> {quote}
> Instead of using memory mapped files, we could use heap memory, or perhaps 
> there are other mechanisms too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1120) Provide additional metrics beyond wall clock time

2016-03-20 Thread Nishani (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishani  updated PHOENIX-1120:
--
Description: 
We should add additional metrics beyond wall clock time. In particular:
- cpu time
- io time
- idle time
- memory
- resource wastage
Probably a zillion more.

  was:
We should add additional metrics beyond wall clock time. In particular:
- cpu time
- io time
- idle time
Probably a zillion more.


> Provide additional metrics beyond wall clock time
> -
>
> Key: PHOENIX-1120
> URL: https://issues.apache.org/jira/browse/PHOENIX-1120
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.1.0
>Reporter: James Taylor
>  Labels: tracing
>
> We should add additional metrics beyond wall clock time. In particular:
> - cpu time
> - io time
> - idle time
> - memory
> - resource wastage
> Probably a zillion more.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2211) Set a common Jetty version for Phoenix

2016-03-20 Thread Nishani (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15203426#comment-15203426
 ] 

Nishani  commented on PHOENIX-2211:
---

Based on communication with Nick Dimiduk, a common Jetty version can be 
included for the whole project.

> Set a common Jetty version for Phoenix
> --
>
> Key: PHOENIX-2211
> URL: https://issues.apache.org/jira/browse/PHOENIX-2211
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nishani 
>  Labels: tracing
>
> Jetty is used in the Tracing Web Application. The Jetty version used is 
> defined at the root pom file. The Jetty version that is used needs to be 
> common to the whole Phoenix project.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2785) Do not store NULLs for immutable tables

2016-03-20 Thread Lars Hofhansl (JIRA)
Lars Hofhansl created PHOENIX-2785:
--

 Summary: Do not store NULLs for immutable tables
 Key: PHOENIX-2785
 URL: https://issues.apache.org/jira/browse/PHOENIX-2785
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.7.0
Reporter: Lars Hofhansl
Priority: Minor


Currently we do store Delete markers (or explicit Nulls). For immutable tables 
that is not necessary: a Null is then indistinguishable from an absent column.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2228) Support CREATE TABLE in Phoenix-Calcite Integration

2016-03-20 Thread Kaide Mu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15203582#comment-15203582
 ] 

Kaide Mu commented on PHOENIX-2228:
---

Hi [~jamestaylor] and [~rajeshbabu], after weeks of exploring this ticket it 
seems that I'm unable to make progress on it, since I'm not familiar with the 
tools involved. If [~RCheungIT] is interested, perhaps he could take a look.
In any case, my sincerest apologies for the inconvenience, and thank you for 
all the help during these weeks. I hope that in the near future I can 
contribute to Apache Phoenix.

> Support CREATE TABLE in Phoenix-Calcite Integration
> ---
>
> Key: PHOENIX-2228
> URL: https://issues.apache.org/jira/browse/PHOENIX-2228
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Maryann Xue
>Assignee: Rajeshbabu Chintaguntla
>  Labels: calcite
> Attachments: PHOENIX-2228-wip.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2166) Prevent writing to tracing table when tracing data collected

2016-03-20 Thread Nishani (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15203773#comment-15203773
 ] 

Nishani  commented on PHOENIX-2166:
---

Hi [~mujtabachohan]
The tracing web app shows all the records in the tracing table, which includes 
the reads and writes to the tracing table itself. For visualization purposes we 
can skip that information by filtering the queries.
Is there any configuration option to stop writing tracing information about 
the tracing table to the tracing table itself?

If there is no such configuration option, we will have to change the code base.
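
Such a visualization-side filter might look like the following (a sketch; the
table name comes from the issue text below, and the description column is an
assumption about the trace table schema):
{code}
select * from SYSTEM.TRACING_STATS
where description not like 'Executing UPSERT INTO SYSTEM.TRACING_STATS%'
and description not like 'Writing mutation batch for table: SYSTEM.TRACING_STATS%';
{code}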

> Prevent writing to tracing table when tracing data collected
> 
>
> Key: PHOENIX-2166
> URL: https://issues.apache.org/jira/browse/PHOENIX-2166
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.5.0
>Reporter: Mujtaba Chohan
>  Labels: gsoc2016, tracing
>
> When tracing is turned ON, trace table grows at fast pace and is filled with 
> the following traces which should not be present:
> {code}
> Executing UPSERT INTO SYSTEM.TRACING_STATS (trace_id, ...
> Writing mutation batch for table: SYSTEM.TRACING_STATS ...
> and so on
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2208) Navigation to trace information in tracing UI should be driven off of query instead of trace ID

2016-03-20 Thread Nishani (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15203786#comment-15203786
 ] 

Nishani  commented on PHOENIX-2208:
---

Hi,

We can achieve this by having a module that maps queries to trace IDs. This 
module would include user interface elements and could be plugged into all 
pages where it is needed, as mentioned by [~mujtabachohan].


> Navigation to trace information in tracing UI should be driven off of query 
> instead of trace ID
> ---
>
> Key: PHOENIX-2208
> URL: https://issues.apache.org/jira/browse/PHOENIX-2208
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Nishani 
>
> Instead of driving the trace UI based on the trace ID, we should drive it off 
> of the query string. Something like a drop down list that shows the query 
> string of the last N queries which can be selected from, with a search box 
> for a regex query string and perhaps time range that would search for the 
> trace ID under the covers. 
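
Under the covers, that lookup might be a query along these lines (a sketch;
table and column names are assumptions about the default tracing table schema,
and the regex pattern is a placeholder):
{code}
select trace_id, description, max(start_time)
from SYSTEM.TRACING_STATS
where REGEXP_SUBSTR(description, 'select .* from orders') is not null
group by trace_id, description
order by max(start_time) desc
limit 20;
{code}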



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)