Re: how to pass 2 different timestamp in RangeFilterBuilder for elasticsearch

2015-01-26 Thread Subhadip Bagui
Thanks David... I got the idea.

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/9dde6e20-3ec4-4a1e-929d-57acd29d467a%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: how to pass 2 different timestamp in RangeFilterBuilder for elasticsearch

2015-01-24 Thread Subhadip Bagui
Sorry, my question is how to pass 2 different Joda DateTime values in a 
RangeFilter. I'm trying to get an aggregated search result from ES for 
every 5-minute interval from the current time until midnight, to plot in a 
graph. 
One way, I think, is to convert the DateTime to millis and pass it as a 
long in RangeFilter's from() and to(). 

Is there a better way?
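
[Editor's sketch, not from the thread: RangeFilterBuilder's from()/to() accept Object, so epoch-millis longs work, which is exactly the poster's own idea. The interval arithmetic below uses java.time instead of Joda so the example is self-contained; each resulting pair could then feed a filter such as FilterBuilders.rangeFilter("@timestamp").from(start).to(end).]

```java
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.util.ArrayList;
import java.util.List;

public class Intervals {
    // Compute [start, end) epoch-milli pairs for every 5-minute interval
    // from 'now' until the next midnight in the given zone. The final
    // interval is clamped so it ends exactly at midnight.
    static List<long[]> fiveMinuteIntervals(ZonedDateTime now) {
        ZonedDateTime midnight =
                now.toLocalDate().plusDays(1).atStartOfDay(now.getZone());
        List<long[]> out = new ArrayList<>();
        for (ZonedDateTime t = now; t.isBefore(midnight); t = t.plusMinutes(5)) {
            ZonedDateTime end =
                    t.plusMinutes(5).isBefore(midnight) ? t.plusMinutes(5) : midnight;
            out.add(new long[] { t.toInstant().toEpochMilli(),
                                 end.toInstant().toEpochMilli() });
        }
        return out;
    }

    public static void main(String[] args) {
        // 23:48 local time -> three intervals: 23:48-23:53, 23:53-23:58, 23:58-00:00
        ZonedDateTime now =
                ZonedDateTime.of(2015, 1, 24, 23, 48, 0, 0, ZoneId.of("Asia/Kolkata"));
        System.out.println(fiveMinuteIntervals(now).size());
        // Each pair would then go into a range filter, e.g. (ES 1.x API):
        //   FilterBuilders.rangeFilter("@timestamp").from(pair[0]).to(pair[1])
    }
}
```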



Re: how to pass 2 different timestamp in RangeFilterBuilder for elasticsearch

2015-01-22 Thread Subhadip Bagui
Hi,

Any ideas?



how to pass 2 different timestamp in RangeFilterBuilder for elasticsearch

2015-01-22 Thread Subhadip Bagui
Hi,

I'm currently using the below code to get an avg value for every 30 minutes 
from the current timestamp. How can I pass 2 Joda DateTime values into this 
range filter to query between those timestamps? Please let me know.


public static Double searchAvgResultCPU(String esIndex, String esType,
        List<String> ipList) {
    String cpuMetricsQueryTime =
            MessageTranslator.getMessage("es.cpu_metrics.query.time");

    Iterator<String> i = ipList.iterator();
    logger.debug(GOT_THE_IP_LIST_AS + ipList);

    if (ipList.isEmpty()) {
        logger.info(THE_IP_LIST_GOT_AS_EMPTY);
        return (double) 0;
    }
    else {
        logger.info("opening a es client connection to query..");
        Client client = ESClientFactory.getInstance();

        BoolQueryBuilder bqb = QueryBuilders.boolQuery();
        logger.info(MAKING_BOOL_QUERY_FOR_ALL_IP_LIST);
        while (i.hasNext()) {
            bqb.should(QueryBuilders.termQuery(ADDRESS, i.next()));
        }

        FilterBuilder fb = FilterBuilders.rangeFilter(TIMESTAMP)
                .from("now-" + cpuMetricsQueryTime).to("now");

        SearchResponse response =
                client.prepareSearch(esIndex).setTypes(esType)
                        .setQuery(QueryBuilders.filteredQuery(bqb, fb))
                        .setSize(20)
                        .addAggregation(
                                AggregationBuilders.avg(CPU_AVERAGE).field(VALUE))
                        .execute().actionGet();

        Avg avg = response.getAggregations().get(CPU_AVERAGE);
        Double averageValue = avg.getValue();

        if (averageValue.isNaN()) {
            logger.debug(THE_AVERAGE_VALUE_IS_RETURNED_AS
                    + averageValue + RETURNING_0_AS_OUTPUT);
            return (double) 0;
        }

        logger.info(THE_AVERAGE_VALUE_RETURNED_IS + averageValue);
        return averageValue;
    }
}


Thanks,
Subhadip



Re: Correct way to use TransportClient connection object

2015-01-18 Thread Subhadip Bagui
Hi,

In the same context: sometimes when I'm shutting down Tomcat I get the 
below exception, and other times it works. Any idea why?

Jan 19, 2015 8:59:30 AM org.apache.catalina.core.StandardContext 
listenerStop
SEVERE: Exception sending context destroyed event to listener instance of 
class com.aricent.aricloud.es.service.ESClientFactory
java.lang.NoClassDefFoundError: 
org/elasticsearch/transport/netty/NettyTransport$4
at 
org.elasticsearch.transport.netty.NettyTransport.doStop(NettyTransport.java:403)
at 
org.elasticsearch.common.component.AbstractLifecycleComponent.stop(AbstractLifecycleComponent.java:105)
at 
org.elasticsearch.transport.TransportService.doStop(TransportService.java:100)
at 
org.elasticsearch.common.component.AbstractLifecycleComponent.stop(AbstractLifecycleComponent.java:105)
at 
org.elasticsearch.common.component.AbstractLifecycleComponent.close(AbstractLifecycleComponent.java:117)
at 
org.elasticsearch.client.transport.TransportClient.close(TransportClient.java:268)
at 
com.aricent.aricloud.es.service.ESClientFactory.shutdown(ESClientFactory.java:118)
at 
com.aricent.aricloud.es.service.ESClientFactory.contextDestroyed(ESClientFactory.java:111)



Re: Correct way to use TransportClient connection object

2015-01-13 Thread Subhadip Bagui
Yeah, I'm using a listener, but it was giving an error. I fixed it and now 
it's working. My mistake :$



limit number of thread in ThreadPool while creating TransportClient in elasticsearch

2015-01-13 Thread Subhadip Bagui
I'm creating a TransportClient instance for elasticsearch; below is the code. 
The issue: I'm trying to lower the number of threads spawned by the thread 
pool that TransportClient initiates. 

public static TransportClient getTransportClient(String ip, int port) {

    ImmutableSettings.Builder settings = ImmutableSettings.settingsBuilder();
    settings.put("cluster.name", "elasticsearch");
    settings.put("threadpool.bulk.type", "fixed");
    settings.put("threadpool.bulk.size", 5);
    settings.put("threadpool.bulk.queue_size", 5);
    settings.put("threadpool.index.type", "fixed");
    settings.put("threadpool.index.size", 5);
    settings.put("threadpool.index.queue_size", 10);
    settings.put("threadpool.search.type", "fixed");
    settings.put("threadpool.search.size", 5);
    settings.put("threadpool.search.queue_size", 5);
    settings.build();

    TransportClient instance = new TransportClient(settings)
            .addTransportAddress(new InetSocketTransportAddress(ip, port));

    return instance;
}



But whatever settings I use, elasticsearch always initializes the thread 
pool with 12 threads. I guess that is 4 times the CPU count used when 
calculating the thread pool size.

Please let me know what configuration gives the desired number of threads.


Thanks,
Bagui
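
[Editor's note, hedged: the threadpool.* settings above size the client's search/index/bulk pools, but the ~12 threads observed most likely come from the Netty transport workers, which default to roughly twice the number of cores. In ES 1.x the setting believed to control them is `transport.netty.worker_count`; verify the name against your version. A sketch:]

```java
// Sketch for ES 1.x (setting name assumed; verify for your version):
// cap the netty transport worker threads the TransportClient spawns.
ImmutableSettings.Builder settings = ImmutableSettings.settingsBuilder();
settings.put("cluster.name", "elasticsearch");
settings.put("transport.netty.worker_count", 2); // default is ~2x CPU cores
TransportClient client = new TransportClient(settings.build())
        .addTransportAddress(new InetSocketTransportAddress(ip, port));
```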



Re: Correct way to use TransportClient connection object

2015-01-13 Thread Subhadip Bagui
Hi Jorg,

Sorry to open the thread again, but I'm now getting an OOM error in Tomcat 
and the web application is crashing. As you suggested, I'm creating the 
static TransportClient instance from contextInitialized() and shutting it 
down with contextDestroyed(), but it seems the methods are not getting 
called. I'm trying it as below.
Can you please check?

public class ESClientFactory implements ServletContextListener {

    /** The logger. */
    private static Logger logger = Logger.getLogger(ESClientFactory.class);

    /** The instance. */
    public static TransportClient instance;

    /**
     * Instantiates a new ES client factory. Note: the servlet container
     * needs an accessible no-arg constructor to create the listener; a
     * private constructor prevents contextInitialized()/contextDestroyed()
     * from ever being called.
     */
    public ESClientFactory() {
    }

    /**
     * Gets the single instance of ESClientFactory.
     *
     * @return single instance of ESClientFactory
     */
    public static Client getInstance() {
        String ipAddress = MessageTranslator.getMessage("es.cluster.ip");
        int transportClientPort = 0;
        String clusterName = MessageTranslator.getMessage("es.cluster.name");

        try {
            transportClientPort = Integer.parseInt(
                    MessageTranslator.getMessage("es.transportclient.port"));
        }
        catch (Exception e) {
            transportClientPort = 9300;
            LogImpl.setWarning(ESClientFactory.class, e);
        }

        logger.debug("got the client ip as :" + ipAddress + " and port :"
                + transportClientPort);
        if (instance == null) {
            logger.debug("the client instance is null, creating a new instance");
            ImmutableSettings.Builder settings =
                    ImmutableSettings.settingsBuilder();
            settings.put("cluster.name", clusterName);
            settings.build();
            instance = new TransportClient(settings)
                    .addTransportAddress(new InetSocketTransportAddress(
                            ipAddress, transportClientPort));

            logger.debug("returning the new created client instance...");
            return instance;
        }
        logger.debug("returning the existing transport client object connection.");
        return instance;
    }

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        logger.debug("initializing the servletContextListener... TransportClient");
        getInstance();
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        logger.debug("closing the servlet context");
        shutdown();
        logger.debug("successfully shutdown threadpool");
    }

    public synchronized void shutdown() {
        if (instance != null) {
            logger.debug("shutdown started");
            instance.close();
            instance.threadPool().shutdown();
            instance = null;
            logger.debug("shutdown complete");
        }
    }

}
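
[Editor's note: the listener methods only fire if the container knows about the class. A minimal registration in WEB-INF/web.xml (assuming a standard Servlet deployment descriptor; with Servlet 3.0+ an @WebListener annotation is an alternative) would be:]

```xml
<!-- Register the factory so contextInitialized()/contextDestroyed() run -->
<listener>
    <listener-class>com.aricent.aricloud.es.service.ESClientFactory</listener-class>
</listener>
```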



Re: elasticsearch client is giving OutOfMemoryError once connection is lost to elasticsearch server

2015-01-12 Thread Subhadip Bagui
Hi Ed,

I my case I have created a singleton TransportClient and querying ES with 
the same in frequent interval to get data. I'm not doing any bulk 
operations only searching index. threadPool needs to be shutdown once 
Tomcat stops I guess, But when my webapplication is up and running is there 
any need to shutdown the threadPool ? I'm getting this error recently. 
Previously if the ES server goes down I used to get 
NoNodeAvailableException and that's OK with me. But now the whole tomcat 
goes down and no app is working showing OutOfMemoryError. Please suggest.



elasticsearch client is giving OutOfMemoryError once connection is lost to elasticsearch server

2015-01-10 Thread Subhadip Bagui
Hi,

I'm using elasticsearch via a TransportClient for multiple operations. The 
issue I'm facing now is that if my ES server goes down, my client-side app 
gets an OutOfMemoryError (exception below). I have to restart Tomcat every 
time after this to bring my application back up. Can someone please suggest 
how to prevent this? 


Jan 9, 2015 5:38:44 PM org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet [spring] in context with path 
[/aricloud] threw exception [Handler processing failed; nested exception is 
java.lang.OutOfMemoryError: unable to create new native thread] with root 
cause
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:640)
at 
java.util.concurrent.ThreadPoolExecutor.addThread(ThreadPoolExecutor.java:681)
at 
java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727)
at 
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:655)
at 
org.elasticsearch.common.netty.util.internal.DeadLockProofWorker.start(DeadLockProofWorker.java:38)
at 
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.openSelector(AbstractNioSelector.java:349)
at 
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.(AbstractNioSelector.java:100)
at 
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.(AbstractNioWorker.java:52)
at 
org.elasticsearch.common.netty.channel.socket.nio.NioWorker.(NioWorker.java:45)
at 
org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:45)
at 
org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:28)


Thanks,
Subhadip



Re: Elasticsearch inserting date type documents as UTC timezone datetime while indexing

2015-01-07 Thread Subhadip Bagui
Hi,

I tried what you suggested, using the code below, but I'm getting an exception. 

DateTime dateTime = new org.joda.time.DateTime();
System.out.println(dateTime);
DateTimeFormatter dateTimeFormatter = ISODateTimeFormat.dateTime()
        .withZone(DateTimeZone.forTimeZone(TimeZone.getTimeZone("Asia/Calcutta")));

try {
    response = client
            .prepareIndex("node-test", "node-entry")
            .setSource(XContentFactory.jsonBuilder()
                    .startObject()
                    .field("NodeName", nodeName)
                    .field("NodeStatus", nodeStatus)
                    .field("@timestamp", dateTime, dateTimeFormatter)
                    .endObject())
            .execute().actionGet();
} catch (ElasticsearchException e) {
    e.printStackTrace();
}

org.elasticsearch.transport.TransportSerializationException: Failed to 
deserialize exception response from stream
at 
org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:169)
at 
org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:123)
at 
org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at 
org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at 
org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at 
org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
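
[Editor's note, hedged: a TransportSerializationException like the one above is often a symptom of a client/server version mismatch, which is worth ruling out first. Independently, pre-formatting the timestamp into a plain String keeps Joda types out of the request entirely; a sketch reusing the formatter built above:]

```java
// Sketch: format first, then index the resulting String, so only a plain
// string is serialized. dateTimeFormatter is the ISO formatter built above.
String ts = dateTimeFormatter.print(dateTime);
response = client.prepareIndex("node-test", "node-entry")
        .setSource(XContentFactory.jsonBuilder()
                .startObject()
                .field("NodeName", nodeName)
                .field("NodeStatus", nodeStatus)
                .field("@timestamp", ts)
                .endObject())
        .execute().actionGet();
```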



Re: elasticsearch startup issue with marvel

2015-01-05 Thread Subhadip Bagui
Hi Jesse,

My marvel was installed in April. I didn't upgrade es or marvel after that.



Re: Elasticsearch inserting date type documents as UTC timezone datetime while indexing

2015-01-04 Thread Subhadip Bagui
Hi,

Please help with suggestions. 



Re: Elasticsearch inserting date type documents as UTC timezone datetime while indexing

2015-01-04 Thread Subhadip Bagui
Hi,

Please help with suggestion.



On Monday, August 11, 2014 7:51:00 PM UTC+5:30, Subhadip Bagui wrote:
>
> Hi,
>
> I'm using the below code to insert some documents into an elasticsearch 
> index. But when inserting, the time ends up in UTC rather than the 
> original GMT+5:30 offset, whereas the sysout before indexing gives me the 
> correct format, *2014-08-11T18:23:13.447+05:30*. Please let me know 
> how to keep the original time format.
>
> public static IndexResponse addDocumentsQuota(Client client, String index,
> String type, String categoryName) {
>
>  IndexResponse indexResponse = null;
>  DateTime date = new DateTime();
>  System.out.println(date);
>  try {
> indexResponse = client.prepareIndex(index, type)
>  .setSource(jsonBuilder()
> .startObject()
> .field("@timestamp", date)
> .field("category_name", categoryName)
> .field("alert_message", "alert message")
> .field("creation_time", date)
> .endObject())
> .execute()
> .actionGet();
> } catch (ElasticsearchException e) {
> // TODO Auto-generated catch block
> e.printStackTrace();
> } catch (IOException e) {
> // TODO Auto-generated catch block
> e.printStackTrace();
> } 
> return indexResponse;
> }
>
>*After indexing:*
>
> "_source": {
>"@timestamp": "2014-08-11T11:57:59.839Z",
>"category_name": "newCat1",
>"alert_message": "alert message",
>"creation_time": "2014-08-11T11:57:59.852Z"
> }
>
> Thanks,
> Subhadip
>



elasticsearch startup issue with marvel

2015-01-02 Thread Subhadip Bagui
Hi,

Suddenly my elasticsearch node started having issues and refuses to start 
up, with errors like the below:

[2015-01-02 17:10:23,678][ERROR][marvel.agent.exporter] [node-master] 
error connecting to [localhost:9200]
java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)

When trying to access "http://localhost:9200" it gives service 
unavailable. Please let me know what could cause this to happen suddenly. 
Is it related to the marvel plugin already in my ES?

Thanks,
Subhadip
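
[Editor's note, hedged: one way to rule Marvel out is to remove or disable the agent and retry the startup. The commands below assume the ES 1.x plugin manager; verify against your install.]

```shell
# Assumption: ES 1.x plugin manager, run from the elasticsearch home dir.
bin/plugin --remove marvel
# Alternatively, keep the plugin but disable just the exporter agent in
# config/elasticsearch.yml:
#   marvel.agent.enabled: false
```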



Re: elasticsearch is taking longer time while doing update and search simultaneously

2014-12-08 Thread Subhadip Bagui
Hi,

In this code I'm trying to update the field "view_mode" value to "read".


On Friday, August 22, 2014 11:25:45 PM UTC+5:30, vineeth mohan wrote:
>
> Hello Subhadip,
>
>
> What exactly are you trying to achieve with this code?
>
>  updateResponse = client.prepareUpdate(index, type, id)
>.setDoc(jsonBuilder()
>.startObject().field("view_mode", "read")
>.endObject())
>  .setDocAsUpsert(true)
>  .setFields("_source")
>  .setTimeout("1")
>
> I was wondering where the modification data is given.
>
> Thanks
> Vineeth
>
>
>
> On Fri, Aug 22, 2014 at 7:00 PM, Subhadip Bagui  > wrote:
>
>> Hi,
>>
>> I'm doing update in elasticsearch document and the same time one rest api 
>> is calling for search results. Below is my code.
>>
>> public String updateElasticsearchDocument(String index, String type, 
>> List indexID) {
>>  Client client = ESClientFactory.getInstance();
>>  UpdateResponse updateResponse = null;
>>  JSONObject jsonResponse = new JSONObject();
>>  JSONObject json =new JSONObject();
>>  int i=1;
>>  try {
>> for(String id : indexID)
>>  { 
>>  updateResponse = client.prepareUpdate(index, type, id)
>>   .setDoc(jsonBuilder()
>>   .startObject().field("view_mode", "read")
>>   .endObject())
>> .setDocAsUpsert(true)
>>  .setFields("_source")
>>  .setTimeout("1")
>>   .execute().actionGet();
>>  logger.info("updating the document for type= "+ 
>> updateResponse.getType()+ " for id= "+ updateResponse.getId());
>>   
>>  json.put("indexID"+i, updateResponse.getId());
>>  i++;
>> } 
>>  jsonResponse.put("updated_index", json);
>>  } catch (ActionRequestValidationException e) {
>>  logger.warn(this.getClass().getName() + ":" + "updateDocument: "
>>  + e.getMessage(), e);
>> } 
>>  catch (ElasticsearchException e) {
>>  logger.warn(this.getClass().getName() + ":" + "updateDocument: "
>>  + e.getMessage(), e);
>> e.printStackTrace();
>>  } catch (IOException e) {
>> logger.warn(this.getClass().getName() + ":" + "updateDocument: "
>>  + e.getMessage(), e);
>> } catch (JSONException e) {
>>  // TODO Auto-generated catch block
>>  e.printStackTrace();
>> }
>>  return jsonResponse.toString();
>> }
>>
>> *the search query is :*
>>
>>  POST /monitoring/quota-management/_search
>>
>> {
>>   "query": {"match": {
>>   "view_mode": "read"
>>}}, 
>> "sort": [
>>{
>>   "_timestamp": {
>>  "order": "desc"
>>   }
>>}
>> ],
>> "size": 10
>> }
>>
>> Now, I have to wait for like 40-50 seconds to get the updated search 
>> result. This is affecting the production application.
>> Please let me know what needs to be done here to minimize the time taken.
>>
>> Thanks,
>> Subhadip



elasticsearch is taking longer time while doing update and search simultaneously

2014-08-22 Thread Subhadip Bagui
Hi,

I'm updating elasticsearch documents while, at the same time, a REST API 
is calling for search results. Below is my code.

public String updateElasticsearchDocument(String index, String type,
        List<String> indexID) {
    Client client = ESClientFactory.getInstance();
    UpdateResponse updateResponse = null;
    JSONObject jsonResponse = new JSONObject();
    JSONObject json = new JSONObject();
    int i = 1;
    try {
        for (String id : indexID) {
            updateResponse = client.prepareUpdate(index, type, id)
                    .setDoc(jsonBuilder()
                            .startObject().field("view_mode", "read")
                            .endObject())
                    .setDocAsUpsert(true)
                    .setFields("_source")
                    .setTimeout("1")
                    .execute().actionGet();
            logger.info("updating the document for type= "
                    + updateResponse.getType() + " for id= "
                    + updateResponse.getId());

            json.put("indexID" + i, updateResponse.getId());
            i++;
        }
        jsonResponse.put("updated_index", json);
    } catch (ActionRequestValidationException e) {
        logger.warn(this.getClass().getName() + ":" + "updateDocument: "
                + e.getMessage(), e);
    } catch (ElasticsearchException e) {
        logger.warn(this.getClass().getName() + ":" + "updateDocument: "
                + e.getMessage(), e);
        e.printStackTrace();
    } catch (IOException e) {
        logger.warn(this.getClass().getName() + ":" + "updateDocument: "
                + e.getMessage(), e);
    } catch (JSONException e) {
        e.printStackTrace();
    }
    return jsonResponse.toString();
}

*The search query is:*

POST /monitoring/quota-management/_search

{
  "query": {
    "match": {
      "view_mode": "read"
    }
  },
  "sort": [
    {
      "_timestamp": {
        "order": "desc"
      }
    }
  ],
  "size": 10
}

Now I have to wait 40-50 seconds to get the updated search result, which 
is affecting the production application.
Please let me know what needs to be done here to minimize the time taken.

Thanks,
Subhadip
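
[Editor's note, hedged: a 40-50 second lag between an update and it showing up in search usually means the documents have not yet been refreshed into a searchable segment. Forcing a refresh after the update loop trades indexing throughput for immediate visibility; the index name `monitoring` is taken from the search request above.]

```java
// Sketch: ES search is near-real-time; an update is searchable only after
// the next refresh. An explicit refresh makes the updates visible right
// away, at an indexing-throughput cost.
client.admin().indices().prepareRefresh("monitoring")
        .execute().actionGet();
```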



Re: Elasticsearch inserting date type documents as UTC timezone datetime while indexing

2014-08-12 Thread Subhadip Bagui
Hi,

Can someone please give me a hint? I'm having trouble finding a solution 
for this.



Re: Elasticsearch inserting date type documents as UTC timezone datetime while indexing

2014-08-11 Thread Subhadip Bagui
Hi,

Any ideas on how to prevent the time from changing while indexing in ES, 
or how to convert it to the correct format at query time?

Thanks,
Subhadip 



Elasticsearch inserting date type documents as UTC timezone datetime while indexing

2014-08-11 Thread Subhadip Bagui
Hi,

I'm using the below code to insert some documents into an elasticsearch 
index. But when inserting, the time ends up in UTC rather than the 
original GMT+5:30 offset, whereas the sysout before indexing gives me the 
correct format, *2014-08-11T18:23:13.447+05:30*. Please let me know how 
to keep the original time format.

public static IndexResponse addDocumentsQuota(Client client, String index,
        String type, String categoryName) {

    IndexResponse indexResponse = null;
    DateTime date = new DateTime();
    System.out.println(date);
    try {
        indexResponse = client.prepareIndex(index, type)
                .setSource(jsonBuilder()
                        .startObject()
                        .field("@timestamp", date)
                        .field("category_name", categoryName)
                        .field("alert_message", "alert message")
                        .field("creation_time", date)
                        .endObject())
                .execute()
                .actionGet();
    } catch (ElasticsearchException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return indexResponse;
}

   *After indexing:*

"_source": {
   "@timestamp": "2014-08-11T11:57:59.839Z",
   "category_name": "newCat1",
   "alert_message": "alert message",
   "creation_time": "2014-08-11T11:57:59.852Z"
}

Thanks,
Subhadip
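
[Editor's note, hedged: Elasticsearch normalizes `date` fields to UTC internally, so the stored ISO string losing its +05:30 offset is expected behavior. One workaround is to index the timestamp as a pre-formatted string so the offset is preserved verbatim. The formatting step can be sketched with java.time (the thread's code uses Joda, which behaves analogously):]

```java
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class TimestampFormat {
    // Render a timestamp with its local offset preserved; the resulting
    // String is what would be indexed, instead of a date object that the
    // client serializes to UTC.
    static String withOffset(ZonedDateTime t) {
        return t.format(DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSSXXX"));
    }

    public static void main(String[] args) {
        ZonedDateTime t = ZonedDateTime.of(2014, 8, 11, 18, 23, 13, 447_000_000,
                ZoneId.of("Asia/Kolkata"));
        System.out.println(withOffset(t));
    }
}
```

Note that the field would then need a matching date format in the mapping (or be stored as a plain string) for range queries to keep working.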



Re: elasticsearch filter query with time range using java api

2014-06-16 Thread Subhadip Bagui
Hi Alex,

Yes I tried that and it's working. Thanks :)



Correct way to use TransportClient connection object

2014-06-05 Thread Subhadip Bagui
Hi,

 I'm using the below code to get a singleton object for TransportClient 
object. I'm using the getInstance() to get the client object which is 
already alive in webapplication. 

public static Client getInstance() {
    if (instance == null) {
        logger.debug("the client instance is null, creating a new instance");
        ImmutableSettings.Builder settings = ImmutableSettings.settingsBuilder();
        settings.put("node.client", true);
        settings.put("node.data", false);
        settings.put("node.name", "node-client");
        settings.put("cluster.name", "elasticsearch");
        settings.build();
        instance = new TransportClient(settings)
                .addTransportAddress(new InetSocketTransportAddress(
                        "10.203.238.139", 9300));
        logger.debug("returning the new created client instance...");
        return instance;
    }
    return instance;
}

Calling the client as below from search api.
Client client = ESClientFactory.getInstance(); 

Now the issue: if I don't close the client with client.close(), I get a 
memory-leak warning from the Tomcat web server. If I do close the 
connection with client.close() after the search API call, subsequent calls 
throw NoNodeAvailableException.

Please suggest what is the correct way to call the connection object.
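For what it's worth, a minimal sketch of one common pattern (not elasticsearch-specific; the Resource type below is a placeholder standing in for TransportClient): initialize the client lazily exactly once with the holder idiom, and close it only at application shutdown (e.g. from a ServletContextListener), never after each search call.

```java
// Sketch: initialization-on-demand holder idiom for a single long-lived
// resource. "Resource" is a hypothetical stand-in for the real client type.
class ResourceHolder {

    // Placeholder for the real client; release sockets/threads in close().
    static class Resource implements AutoCloseable {
        @Override
        public void close() { /* release resources here */ }
    }

    // The JVM guarantees Holder is initialized lazily and exactly once,
    // so getInstance() is thread-safe without explicit locking.
    private static class Holder {
        static final Resource INSTANCE = new Resource();
    }

    private ResourceHolder() {}

    public static Resource getInstance() {
        return Holder.INSTANCE;
    }

    // Call once, at application shutdown (e.g. contextDestroyed()).
    public static void shutdown() {
        Holder.INSTANCE.close();
    }
}
```

Every caller then shares the same instance, and the shutdown hook closes it exactly once, which is what avoids both the leak warning and the NoNodeAvailableException.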

Thanks,
Subhadip

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/e2aaf77f-cd18-4e52-98fc-c25ed03601fd%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Aggregation average value is not coming correct

2014-06-05 Thread Subhadip Bagui
Hi,

I'm using the code below to get the average value of cpu_usage via an 
aggregation. When I check the cpu values individually and compute the 
average myself, it does not match the aggregation's avg value. I'm 
using a bool query along with a rangeFilter here to fetch the data.

Please help to identify the issue.
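One thing worth checking (a hedged guess, not a confirmed diagnosis): a post filter is applied after aggregations are computed, so a range filter set via setPostFilter() may not constrain the avg at all, and note that "value" in the documents below is indexed as a string. Moving the filter into a filtered query, written as the equivalent REST body, would look roughly like:

```json
{
  "query": {
    "filtered": {
      "query": {
        "bool": {
          "should": [ { "term": { "address": "10.203.238.138" } } ],
          "must_not": [ { "term": { "address": "10.203.238.140" } } ]
        }
      },
      "filter": {
        "range": { "@timestamp": { "from": "now-30m", "to": "now" } }
      }
    }
  },
  "aggs": { "cpu_average": { "avg": { "field": "value" } } }
}
```

In the 1.x Java API the same move would be wrapping the two builders with QueryBuilders.filteredQuery(bqb, fb) and passing that to setQuery() instead of using setPostFilter().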

*Code :*
public static SearchResponse searchResultWithAggregation(String es_index,
String es_type, List<String> ipList) {
logger.debug("inside method searchResultWithAggregation...");
Client client = ESClientFactory.getInstance();
logger.debug("got the elasticsearch client connection");

BoolQueryBuilder bqb = QueryBuilders.boolQuery()
.mustNot(QueryBuilders.termQuery("address", "10.203.238.140"));

Iterator<String> i = ipList.iterator();
logger.debug("got the ip list as :" + ipList);

while (i.hasNext()) {
bqb.should(QueryBuilders.termQuery("address", i.next()));
}

String time = "now-30m";
FilterBuilder fb = FilterBuilders.rangeFilter("@timestamp").from(time)
.to("now");

SearchResponse response = client
.prepareSearch(es_index)
.setTypes(es_type)
.setQuery(bqb)
.setPostFilter(fb)
.addAggregation(
AggregationBuilders.avg("cpu_average").field("value"))
.setSize(100).execute().actionGet();

System.out.println(response.toString());

return response;
}

*Output :*
{
  "took" : 31,
  "timed_out" : false,
  "_shards" : {
"total" : 5,
"successful" : 5,
"failed" : 0
  },
  "hits" : {
"total" : 15,
"max_score" : 1.7314732,
"hits" : [ {
  "_index" : "cpu_usage_metrics",
  "_type" : "cpu_usage_metrics",
  "_id" : "UQ9vquDGTQO8WedjgCcESA",
  "_score" : 1.7314732, "_source" : 
{"status":0,"occurrences":1,"value":"1","key":"Aricloud.vm.cpu_usage.cpu.usage","client":"vm.server2","@timestamp":"2014-06-05T15:23:13+05:30","check_name":"cpu_usage_metrics","address":"10.203.238.138","command":"cpu-usage-metrics.sh
 
-s Aricloud.`hostname -s`.cpu_usage"}
}, {
  "_index" : "cpu_usage_metrics",
  "_type" : "cpu_usage_metrics",
  "_id" : "EMT85ZKcS3OuoDmHgcSEjw",
  "_score" : 1.7314732, "_source" : 
{"status":0,"occurrences":1,"value":"3","key":"Aricloud.vm.cpu_usage.cpu.usage","client":"vm.server2","@timestamp":"2014-06-05T15:25:13+05:30","check_name":"cpu_usage_metrics","address":"10.203.238.138","command":"cpu-usage-metrics.sh
 
-s Aricloud.`hostname -s`.cpu_usage"}
}, {
  "_index" : "cpu_usage_metrics",
  "_type" : "cpu_usage_metrics",
  "_id" : "0Pf-XKZmTI-wpADuIVToFA",
  "_score" : 1.7314714, "_source" : 
{"status":0,"occurrences":1,"value":"3","key":"Aricloud.vm.cpu_usage.cpu.usage","client":"vm.server2","@timestamp":"2014-06-05T15:21:13+05:30","check_name":"cpu_usage_metrics","address":"10.203.238.138","command":"cpu-usage-metrics.sh
 
-s Aricloud.`hostname -s`.cpu_usage"}
}, {
  "_index" : "cpu_usage_metrics",
  "_type" : "cpu_usage_metrics",
  "_id" : "Pdn5h2gGRsK0hL2DKj0ZjA",
  "_score" : 1.7314714, "_source" : 
{"status":0,"occurrences":1,"value":"2","key":"Aricloud.vm.cpu_usage.cpu.usage","client":"vm.server2","@timestamp":"2014-06-05T15:27:13+05:30","check_name":"cpu_usage_metrics","address":"10.203.238.138","command":"cpu-usage-metrics.sh
 
-s Aricloud.`hostname -s`.cpu_usage"}
}, {
  "_index" : "cpu_usage_metrics",
  "_type" : "cpu_usage_metrics",
  "_id" : "5_mloLYMSgKRb_lnH7pqGQ",
  "_score" : 1.7314714, "_source" : 
{"status":0,"occurrences":1,"value":"3","key":"Aricloud.vm.cpu_usage.cpu.usage","client":"vm.server2","@timestamp":"2014-06-05T15:33:13+05:30","check_name":"cpu_usage_metrics","address":"10.203.238.138","command":"cpu-usage-metrics.sh
 
-s Aricloud.`hostname -s`.cpu_usage"}
}, {
  "_index" : "cpu_usage_metrics",
  "_type" : "cpu_usage_metrics",
  "_id" : "xjBgO2cXTH-DIQoNpIRnBA",
  "_score" : 1.7314714, "_source" : 
{"status":0,"occurrences":1,"value":"4","key":"Aricloud.vm.cpu_usage.cpu.usage","client":"vm.server2","@timestamp":"2014-06-05T15:35:13+05:30","check_name":"cpu_usage_metrics","address":"10.203.238.138","command":"cpu-usage-metrics.sh
 
-s Aricloud.`hostname -s`.cpu_usage"}
}, {
  "_index" : "cpu_usage_metrics",
  "_type" : "cpu_usage_metrics",
  "_id" : "0sclBpwcRQmfyKklXPJbow",
  "_score" : 1.7314694, "_source" : 
{"status":0,"occurrences":1,"value":"3","key":"Aricloud.vm.cpu_usage.cpu.usage","client":"vm.server2","@timestamp":"2014-06-05T15:29:13+05:30","check_name":"cpu_usage_metrics","address":"10.203.238.138","command":"cpu-usage-metrics.sh
 
-s Aricloud.`hostname -s`.cpu_usage"}
}, {
  "_index" : "cpu_usage_metrics",
  "_type" : "cpu_usage_metrics",
  "_id" : "O5CVCkYtRQGxJG3TJWDoNw",
  "_score" : 1.7314694, "_source" : 
{"status":0,"occurrences":1,"value":"4","key":"Aricloud.vm.cpu_usage.cpu.usage","client":"vm.server2","@timestamp":"2014-06-05T15:39:13+05:30","check_name":"cpu_usage_metrics","address":"10.203.238.138","command":"cpu-usage-metrics.sh
 
-s Aricloud.`hostname -s`.cpu_usage"}
}, {

elasticsearch QueryBuilder with dynamic value in term query

2014-06-04 Thread Subhadip Bagui


I have code like the below where I'm combining multiple must clauses in a 
bool query, passing term queries on the field "address". The IP addresses 
will come to me as a list from another API, and I have to add a term query 
for every IP in that list. I can't see how to pass the address values 
dynamically when building the QueryBuilder.

Please suggest how to do this.

public static SearchResponse searchResultWithAggregation(String es_index,
String es_type, List<String> ipList, String queryRangeTime) {
Client client = ESClientFactory.getInstance();

QueryBuilder qb = QueryBuilders.boolQuery()
.must(QueryBuilders.termQuery("address", "10.203.238.138"))
.must(QueryBuilders.termQuery("address", "10.203.238.137"))
.must(QueryBuilders.termQuery("address", "10.203.238.136"))
.mustNot(QueryBuilders.termQuery("address", "10.203.238.140"))
.should(QueryBuilders.termQuery("client", ""));

queryRangeTime = "now-" + queryRangeTime + "m";
FilterBuilder fb = FilterBuilders.rangeFilter("@timestamp")
.from(queryRangeTime).to("now");

SearchResponse response = client
.prepareSearch(es_index)
.setTypes(es_type)
.setQuery(qb)
.setPostFilter(fb)
.addAggregation(
AggregationBuilders.avg("cpu_average").field("value"))
.setSize(10).execute().actionGet();

System.out.println(response.toString());
return response;}
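As a sketch of the dynamic-construction idea using only the JDK (no elasticsearch classes, so the helper name below is hypothetical): build a "terms" clause from the runtime list, which is the JSON equivalent of looping over the list and adding one term clause per element to the bool query.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Sketch: build the JSON body of a "terms" filter dynamically from a list
// of IPs that only arrives at runtime. With the Java API the same idea is
// a loop adding one termQuery per element (or a single terms query).
class TermsBodyBuilder {

    public static String termsFilter(String field, List<String> values) {
        // Quote each value and join with commas: "a", "b", ...
        String joined = values.stream()
                .map(v -> "\"" + v + "\"")
                .collect(Collectors.joining(", "));
        return "{ \"terms\": { \"" + field + "\": [" + joined + "] } }";
    }

    public static void main(String[] args) {
        List<String> ips = Arrays.asList("10.203.238.138", "10.203.238.137");
        System.out.println(termsFilter("address", ips));
        // prints: { "terms": { "address": ["10.203.238.138", "10.203.238.137"] } }
    }
}
```

Note that multiple must term clauses on the same single-valued field can never all match one document; for "any of these IPs" semantics, should clauses (as in the later post in this thread) or a terms clause are the usual shape.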


-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/1b6d7ba7-5cc5-4f26-abce-9e6614d39ed4%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


elasticsearch filter query with time range using java api

2014-06-04 Thread Subhadip Bagui
Hi,

I have a document like below

{
"_index": "cpu_usage_metrics",
"_type": "cpu_usage_metrics",
"_id": "CKAAs1n8TKiR6FncC5NLGA",
"_score": 1,
"_source": {
   "status": 0,
   "occurrences": 1,
   "value": "33",
   "key": "vm.server2.cpu.usage",
   "client": "vm.server2",
   "@timestamp": "2014-06-03T20:18:19+05:30",
   "check_name": "cpu_usage_metrics",
   "address": "10.203.238.138",
   "command": "cpu-usage-metrics.sh"
}
 }

 I want to do a filtered query with time range using java api like below

"filter": {
"range": {
"@timestamp": {
 "to": "now",
"from": "now - 5mins"
}
}
}


Please suggest how to form the Filter in java api.
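Two hedged options for the bounds: the 1.x range filter accepts date-math strings such as "now-5m" and "now" directly, or explicit epoch-millis longs can be computed and passed to from()/to() (shown below with java.time; Joda's DateTime.getMillis() yields the same kind of long).

```java
import java.time.Duration;
import java.time.Instant;

// Sketch: compute explicit epoch-millis bounds for a "last N minutes"
// range filter. The returned longs can be passed to the range filter's
// from()/to(); the date-math strings "now-5m"/"now" are an alternative.
class RangeBounds {

    public static long[] lastMinutes(int minutes) {
        Instant now = Instant.now();
        long from = now.minus(Duration.ofMinutes(minutes)).toEpochMilli();
        long to = now.toEpochMilli();
        return new long[] { from, to };
    }

    public static void main(String[] args) {
        long[] b = lastMinutes(5);
        System.out.println("from=" + b[0] + " to=" + b[1]);
    }
}
```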

Thanks,
Subhadip

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/f596658e-5ec4-42ac-abc1-4f99416be101%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: avg aggregation on string values

2014-06-03 Thread Subhadip Bagui
Hi,

please suggest...

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/cef8d6e2-95f7-4040-9683-df188f1717cb%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


avg aggregation on string values

2014-06-03 Thread Subhadip Bagui
Hi,

I have documents like the one below in elasticsearch, coming from a 3rd-party 
API, and there are values on which I want to aggregate.

{
"_index": "virtualmachines",
"_type": "nodes",
"_id": "103",
"_score": 1,
"_source": {
   "NODE_ID": "12335",
   "CLOUD_TYPE": "AWS-EC2",
   "NODE_GROUP_NAME": "MYSQL",
   "NODE_CPU": "4GHZ",
   "NODE_HOSTNAME": "cloud.aricent.com",
   "NODE_NAME": "aws-node1",
   "NODE_PRIVATE_IP_ADDRESS": "10.123.124.126",
   "NODE_PUBLIC_IP_ADDRESS": "125.31.108.73",
   "NODE_INSTANCE_ID": "aws111",
   "NODE_STATUS": "INACTIVE",
   "NODE_CATEGORY_ID": "2",
   "NODE_CREATE_TIME": "2014-05-22 14:40:35",
   "CPU_SPEED": "500",
   "MEMORY": 512,
   "CPU_USED": "0.02%"
 }

Here is my code from where I do aggregation

SearchResponse response = client.prepareSearch("virtualmachines")
.setTypes("nodes").setQuery(QueryBuilders.matchAllQuery())
//.addAggregation(AggregationBuilders.avg("mem_average").field("CPU_SPEED"))
.addAggregation(AggregationBuilders.avg("mem_average").script("doc['CPU_USED'].value"))
.execute().actionGet();
*error :*
Exception in thread "main" 
org.elasticsearch.transport.TransportSerializationException: Failed to 
deserialize exception response from stream
at 
org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:169)

the mapping for CPU_USED is 
"CPU_USED": {
  "type": "string"
   }

If I change the mapping to type long then the same query works. Is there any 
way to read the string value and still do the aggregation?
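A hedged client-side alternative (plain JDK, hypothetical helper name): fetch the hits and average the percentage strings in Java, stripping the "%" before parsing, instead of asking the aggregation to do it.

```java
import java.util.Arrays;
import java.util.List;

// Sketch: average percentage strings such as "0.02%" on the client side.
class PercentAverage {

    public static double average(List<String> percents) {
        double sum = 0;
        for (String p : percents) {
            // Strip the trailing "%" and parse the numeric part.
            sum += Double.parseDouble(p.replace("%", ""));
        }
        return percents.isEmpty() ? 0.0 : sum / percents.size();
    }

    public static void main(String[] args) {
        List<String> used = Arrays.asList("0.02%", "0.04%");
        System.out.printf("%.2f%n", average(used)); // prints 0.03
    }
}
```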

Thanks,
Subhadip

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/bff62cd0-65af-4090-90d3-2d14aeb3363a%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: average on string values containing % in elasticsearch

2014-05-25 Thread Subhadip Bagui
Any help on this ?

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/cc52b02a-f71f-4bfa-9219-5f5a3a6b25b2%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: average on string values containing % in elasticsearch

2014-05-25 Thread Subhadip Bagui
any suggestion ?

Subhadip

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/71bcd469-d1b7-4b37-87fd-b7100cb9e5ad%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: aggregation with average value in java api

2014-05-24 Thread Subhadip Bagui
Thank you Adrien, I'm getting the aggregation value now.

One doubt here: I have a field which stores values like "CPU_USED": "0.04%" 
and I want to aggregate on it. Can I do any string manipulation on the field 
passed to AggregationBuilders? I tried the following, but it is not working. 
Please suggest.

addAggregation(AggregationBuilders.avg("cpu_used_avg").script("doc['CPU_USED'].value".replace("%", "")))

Thanks
Subhadip 

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/8635fb87-8d03-44ac-866a-550b5bb07dce%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


average on string values containing % in elasticsearch

2014-05-24 Thread Subhadip Bagui
Hi,

I have a string field in my documents with values like "CPU_USED": "0.02%". 
I want to compute the average of all the values using some filters. I tried 
the query below but I'm getting an error. Please suggest how to do this.

POST /virtualmachines/_search
{
"query" : {
"filtered" : {
"query" : { "match" : {
  "CLOUD_TYPE" : "CLOUDSTACK" 
}},
"filter" : {
"range" : { "NODE_CREATE_TIME" : { "from" : "2014-05-22 
14:11:35", "to" : "2014-05-22 14:33:35" }}
}
}
},
"aggs" : {
"avg_cpu" : { "avg" : { "script" : "doc['CPU_USED'].value" } }
}
}

*error :*
AggregationExecutionException[Unsupported script value [0.03%]]
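One hedged workaround, assuming dynamic scripting is enabled: do the string manipulation inside the script itself rather than on the raw doc value, roughly like the fragment below (untested; the exact syntax depends on the scripting language configured on the cluster):

```json
"aggs": {
  "avg_cpu": {
    "avg": {
      "script": "Double.parseDouble(doc['CPU_USED'].value.replace(\"%\",\"\"))"
    }
  }
}
```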

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/02462a04-aedb-4049-b650-9def0de582cb%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


aggregation with average value in java api

2014-05-23 Thread Subhadip Bagui
Hi,

I'm trying to run an aggregation using the java api, but I can't get the 
*SearchResponse* correctly and I'm getting the error below. I've written 
the method below using XContentBuilder, and couldn't find any working 
examples on google. Please suggest a working example for an aggregation 
of a single value.

public static Aggregations searchAggregation() throws IOException {

Client client = ESClientFactory.getInstance();

XContentBuilder contentBuilder = XContentFactory.jsonBuilder()
.startObject("aggs").startObject("memory_average")
.startObject("avg").field("field", "MEMORY").endObject()
.endObject().endObject();

System.out.println(contentBuilder.string());

SearchResponse response = client.prepareSearch("virtualmachines")
.setTypes("nodes").setQuery(QueryBuilders.matchAllQuery())
.setAggregations(contentBuilder).execute().actionGet();

client.close();

System.out.println(response);

return response.getAggregations();

}

*Error:*

Exception in thread "main" 
org.elasticsearch.transport.TransportSerializationException: Failed to 
deserialize exception response from stream

  at 
org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:169)

  at 
org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:123)

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/9c802ccd-1c2d-4d7a-b6f5-c0f045a5097d%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: how to get only aggregation values from elasticsearch

2014-05-23 Thread Subhadip Bagui
Thanks a lot Ivan. Will try with the Java API as you suggested.

One more doubt: there is a field in my document like "CPU_USED": "0.03%". 
Can we do an avg aggregation on this field? I tried but got the exception 
below, as this is a string field. Can you please suggest?

"reason": "QueryPhaseExecutionException[[virtualmachines][2]: 
query[filtered(CLOUD_TYPE:cloudstack)->cache(NODE_CREATE_TIME:[1400767895000 
TO 1400769215999])],from[0],size[0]: Query Failed [Failed to execute main 
query]]; nested: AggregationExecutionException[Unsupported script value 
[0.03]]; "
*Query :*
POST /virtualmachines/_search
{
"query" : {
"filtered" : {
"query" : { "match" : {
  "CLOUD_TYPE" : "CLOUDSTACK" 
}},
"filter" : {
"range" : { "NODE_CREATE_TIME" : { "from" : "2014-05-22 
14:11:35", "to" : "2014-05-22 14:33:35" }}
}
}
},
"aggs" : {
"avg_grade" : { "avg" : { "script" : "doc['CPU_USED'].value" } }
},
"size": 0
}

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/bd12c67b-9369-4231-90ec-a818a6e9be9a%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: how to get only aggregation values from elasticsearch

2014-05-22 Thread Subhadip Bagui

Thanks Ivan for your reply. 
I tried with "size": 0 but the metadata is still coming back. Is there any 
way to suppress the metadata? Can I use the java api to do so? Earlier I 
used the method below to get only the source from the response. Can we do 
the same for an aggregation?

public String searchESIndex(String index, String type) {
Client client = ESClientFactory.getInstance();
SearchRequestBuilder srequest = client.prepareSearch(index)
.setTypes(type).setSearchType(SearchType.QUERY_AND_FETCH)
.setQuery(QueryBuilders.matchQuery("status", "SUSPEND"))
.setSize(1);
SearchResponse sresp = srequest.execute().actionGet();
List<Map<String, Object>> listOfSource = new ArrayList<Map<String, Object>>();
for (SearchHit hits : sresp.getHits()) {
listOfSource.add(hits.getSource());
}
Gson gson = new Gson();
String jsonSourceList = gson.toJson(listOfSource);
return jsonSourceList;
}

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/47f686b0-e844-410b-955e-6dfb88221569%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


how to get only aggregation values from elasticsearch

2014-05-22 Thread Subhadip Bagui
Hi,

I want to get the average value of the MEMORY field from my ES documents. 
Below is the query I'm using. I'm getting the hits JSON along with the 
aggregation. Is there any way to get only the aggregation result? Please 
suggest.

POST /virtualmachines/_search
{
"query" : {
"filtered" : {
"query" : { "match" : {
  "CLOUD_TYPE" : "CLOUDSTACK" 
}},
"filter" : {
"range" : { "NODE_CREATE_TIME" : { "from" : "2014-05-22 
14:11:35", "to" : "2014-05-22 14:33:35" }}
}
}
},
"aggs" : {
"memory_avg" : { "avg" : { "field" : "MEMORY" } }
}
}

*response* : 
{
   "took": 2,
   "timed_out": false,
   "_shards": {
  "total": 3,
  "successful": 3,
  "failed": 0
   },
   "hits": {
  "total": 1,
  "max_score": 1,
  "hits": [
 {
"_index": "virtualmachines",
"_type": "nodes",
"_id": "102",
"_score": 1,
"_source": {
   "NODE_ID": "12235",
   "CLOUD_TYPE": "CLOUDSTACK",
   "NODE_GROUP_NAME": "JBOSS",
   "NODE_CPU": "4GHZ",
   "NODE_HOSTNAME": "cloud.aricent.com",
   "NODE_NAME": "cloudstack-node1",
   "NODE_PRIVATE_IP_ADDRESS": "10.123.124.125",
   "NODE_PUBLIC_IP_ADDRESS": "125.31.108.72",
   "NODE_INSTANCE_ID": "cloudstack112",
   "NODE_STATUS": "ACTIVE",
   "NODE_CATEGORY_ID": "13",
   "NODE_CREATE_TIME": "2014-05-22 14:23:04",
   "CPU_SPEED": "500",
   "MEMORY": 512,
   "CPU_USED": "0.03%"
}
 }
  ]
   },
   "aggregations": {
  "memory_avg": {
 "value": 512
  }
   }
}
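For reference, two hedged options for dropping the hits from this response in this version: add "size": 0 to the body, or issue the search with search_type=count; the took/_shards metadata still appears either way, but the hits array comes back empty:

```json
POST /virtualmachines/_search?search_type=count
{
  "query": { "match": { "CLOUD_TYPE": "CLOUDSTACK" } },
  "aggs": { "memory_avg": { "avg": { "field": "MEMORY" } } }
}
```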

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/a8538769-7ba5-4ad5-b190-5f0b8d25a26b%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


elasticsearch TransportClient singleton instance

2014-04-09 Thread Subhadip Bagui
Hi,

I'm trying to use a singleton client instance for multiple index creations; 
the code is below. But every time the instance comes back as null and a new 
instance is created. Please let me know what I'm doing wrong.

*singleton instance* :

public class ESClientSingleton {

public static Client instance ;

private ESClientSingleton()
{}
 public static Client getInstance()
{
if (instance == null)
  {
System.out.println("the instance is null...");
ImmutableSettings.Builder settings = ImmutableSettings.settingsBuilder(); 
settings.put("node.client", true); 
settings.put("node.data", false); 
settings.put("node.name", "node-client");
settings.put("cluster.name", "elasticsearch");
settings.build(); 
instance = new TransportClient(settings)
.addTransportAddress(new 
InetSocketTransportAddress("10.203.251.142", 9300));
//instance = client;
return instance;
  }
return instance;
} 
}

*calling method* :

public static IndexResponse insertESDocument(String nodeName, String json)
  {  
  Client client = ESClientSingleton.getInstance();
  logger.debug("calling the es client");
  logger.debug("the json received as == "+json);
  IndexResponse response = 
client.prepareIndex("aricloud-nodes","node-entry",nodeName )
 .setSource(json)
 .execute()
 .actionGet();
  logger.debug("the document is successfully indexed...");
  System.out.println("the document is indexed...");
  //client.close();
 return response; 
  } 

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/36c95b86-5fa3-4af0-998a-a1b15b0bd1ab%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


getting "Network is unreachable" while running elasticsearch from windows

2014-04-08 Thread Subhadip Bagui
Hi,

I'm running elasticsearch on Windows from the command prompt. It starts as 
shown below.

D:\elasticsearch\elasticsearch-1.0.1\bin>elasticsearch
[2014-04-08 13:39:00,199][WARN ][bootstrap] jvm uses the client vm, make sure to run `java` with the server vm for best performance by adding `-server` to the command line
[2014-04-08 13:39:00,379][INFO ][node ] [node-master] version[1.0.1], pid[11568], build[5c03844/2014-02-25T15:52:53Z]
[2014-04-08 13:39:00,379][INFO ][node ] [node-master] initializing ...
[2014-04-08 13:39:00,411][INFO ][plugins  ] [node-master] loaded [], sites []
[2014-04-08 13:39:06,237][INFO ][node ] [node-master] initialized
[2014-04-08 13:39:06,237][INFO ][node ] [node-master] starting ...
[2014-04-08 13:39:06,651][INFO ][transport] [node-master] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.204.226.15:9300]}
[2014-04-08 13:39:10,084][INFO ][cluster.service  ] [node-master] new_master [node-master][DvA9zQxzR_6pSY6ESeaFcw][BGHWV2188][inet[/10.204.226.15:9300]], reason: zen-disco-join (elected_as_master)
[2014-04-08 13:39:10,131][INFO ][discovery] [node-master] elasticsearch/DvA9zQxzR_6pSY6ESeaFcw
[2014-04-08 13:39:10,382][INFO ][http ] [node-master] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/10.204.226.15:9200]}
[2014-04-08 13:39:11,471][INFO ][gateway  ] [node-master] recovered [4] indices into cluster_state
[2014-04-08 13:39:11,473][INFO ][node ] [node-master] started


The elasticsearch cluster is working fine, but if I leave the cmd window 
inactive for a long time I get the network exception below. Meanwhile the 
REST client keeps working and I'm able to GET and POST on existing indexes. 
Can someone let me know why this is happening?

---
[2014-04-08 15:37:59,401][WARN ][monitor.jvm  ] [node-master] [gc][young][7123][4] duration [3.8s], collections [1]/[4.1s], total [3.8s]/[4s], memory [95.6mb]->[29.8mb]/[998.4mb], all_pools {[young] [68.3mb]->[4.1mb]/[204.8mb]}{[survivor] [8.5mb]->[4.5mb]/[25.5mb]}{[old] [18.8mb]->[21.2mb]/[768mb]}
[2014-04-08 22:21:34,779][WARN ][monitor.jvm  ] [node-master] [gc][young][10162][5] duration [2.1s], collections [1]/[2.3s], total [2.1s]/[6.1s], memory [94mb]->[26.7mb]/[998.4mb], all_pools {[young] [68.3mb]->[3.1mb]/[204.8mb]}{[survivor] [4.5mb]->[2.3mb]/[25.5mb]}{[old] [21.2mb]->[21.2mb]/[768mb]}
[2014-04-09 11:27:27,628][WARN ][transport.netty  ] [node-master] exception caught on transport layer [[id: 0xc3ad957, 0.0.0.0/0.0.0.0:63427]], closing connection
java.net.SocketException: Network is unreachable: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
at 
org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:150)
at 
org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
at 
org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
at 
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/925dc2a8-b79a-4e40-8985-cbfc06992d9d%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: getting MasterNotDiscoveredException for ES node for client

2014-04-08 Thread Subhadip Bagui
Hi,

The issue is that if I leave ES inactive for some time I get a 
java.net.SocketException: Network is unreachable error, and I have to 
restart ES to get it working again. Maybe too many nodes or clients are 
being created, so I'm trying to reuse the same client instance everywhere, 
but the singleton instance is not being returned.

The cluster itself is working fine, and through REST GET calls I can see 
all the indexes.

Thanks
Subhadip

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/92c69cdc-a06c-45ea-a8a2-7c984de95fbf%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: getting MasterNotDiscoveredException for ES node for client

2014-04-08 Thread Subhadip Bagui
Hi David,

I'm not getting the singleton instance back for the existing client; I'm 
trying something like the code below. Can you please give a working example?

public class ESClientSingleton {

private static ESClientSingleton instance;
private ESClientSingleton()
{

}

public static Object getInstance()
{
if (instance == null)
  {
ImmutableSettings.Builder settings = ImmutableSettings.settingsBuilder(); 
settings.put("node.client", true); 
settings.put("node.data", false); 
settings.put("node.name", "node-client");
settings.put("cluster.name", "elasticsearch");
settings.build(); 
Client client = new TransportClient(settings)
.addTransportAddress(new 
InetSocketTransportAddress("localhost", 9300));
return client;
  }
return instance;
} 
}

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/7057090c-6b4f-4877-9a58-77fa3d16f2fb%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: getting MasterNotDiscoveredException for ES node for client

2014-04-08 Thread Subhadip Bagui
Thanks a lot David :)

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/ecf94974-ff55-4fc9-9d09-1d2bd8a48060%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: getting MasterNotDiscoveredException for ES node for client

2014-04-08 Thread Subhadip Bagui
Hi David,

I'm trying to store all cloud-event data (create node, delete node, etc.) 
in ES. So whenever a new event occurs I create a new node and client, do 
the indexing, and then close the node.

As you said, should I create the node once and reuse it on every occurrence, 
like a singleton? Or can I create a node with the same name every time and 
close it after the ES operation is done? Please suggest.

Thanks.
Subhadip

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/1c2eba75-2653-48a5-9906-9b72b156d64f%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: getting MasterNotDiscoveredException for ES node for client

2014-04-07 Thread Subhadip Bagui
Hi David,

If I restart ES it works again, but the issue comes back after multiple 
calls over time.

My client and server are on the same system, and I stopped the firewall as 
well. Any suggestion on how to get multicast working? Can I create and 
close a client node multiple times?

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/eef4ddbf-884e-41e6-a387-b1980cf23320%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: getting MasterNotDiscoveredException for ES node for client

2014-04-07 Thread Subhadip Bagui
Hi,

If I remove client(true) from the node initialization then I can index 
documents successfully. But I want to create this node as a client only, 
without shards being allocated to it, as stated 
in 
http://www.elasticsearch.org/guide/en/elasticsearch/client/java-api/current/client.html

Also, I'm getting the exception below when creating an index on existing 
shards. Please let me know what the issue is... 
--
07-04-2014 22:33:30,813  WARN [elasticsearch[node-master][clusterService#updateTask][T#1]] org.elasticsearch.common.logging.log4j.Log4jESLogger 129 - [node-master] failed to connect to node [[Spoilsport][BVej5jdZRIearNwMRcrCYg][Subhadip-PC][inet[/192.168.1.3:9302]]{client=true, data=false}]

org.elasticsearch.transport.ConnectTransportException: 
[Spoilsport][inet[/192.168.1.3:9302]] connect_timeout[30s]

at 
org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:718)

at 
org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:647)

at 
org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:615)

at 
org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:129)


--
Subhadip

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/6ed9c982-ea2e-41dc-bdb4-cf4590075229%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: getting MasterNotDiscoveredException for ES node for client

2014-04-07 Thread Subhadip Bagui
Please suggest. I'm getting a connection error on the elasticsearch side even 
though the node is running.

[2014-04-07 17:22:40,109][WARN ][transport.netty  ] [node-master] 
exception caught on transport layer [[id: 0xd2
f92b59, 0.0.0.0/0.0.0.0:64313]], closing connection
java.net.SocketException: Network is unreachable: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
at 
org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:150)
at 
org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
at 
org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
at 
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at 
org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
at 
org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/0dd852b6-e5a1-46ce-994c-496bbac0575e%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


getting MasterNotDiscoveredException for ES node for client

2014-04-07 Thread Subhadip Bagui
Hi,

I'm using the code below to index data from the cloud, but I'm getting the 
exception below while calling prepareIndex. Please let me know why this 
exception occurs.

  public static IndexResponse insertESDocument(String nodeName, String json)
  {
      Node node = nodeBuilder().clusterName("elasticsearch")
                               .client(true).data(false).node();
      Client client = node.client();
      logger.debug("the node has been created with == " + node.settings().getAsMap());
      logger.debug("the json received as == " + json);

      IndexResponse response = client.prepareIndex("aricloud-nodes", "node-entry", nodeName)
                                     .setSource(json)
                                     .execute()
                                     .actionGet();
      client.close();
      node.close();
      return response;
  }

--- Exception
07-04-2014 17:45:48,073  WARN [http-apr-8080-exec-4] xxx 1588 - 
RunNodesException
org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [1m]
at 
org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(TransportMasterNodeOperationAction.java:180)
at 
org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:491)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
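Building a fresh client-only Node for every insert is expensive: each call has to rediscover the master within the timeout, which is exactly what fails here. A common alternative is to create the client once and reuse it for the life of the application, closing it only at shutdown. Below is a minimal sketch of that pattern; the Client class is a placeholder standing in for the Elasticsearch client type so the idea is self-contained, not the real API.

```java
class EsClientHolder {

    // Placeholder for an expensive-to-create resource such as a Node's client.
    static class Client {
        void close() { /* released once, at application shutdown */ }
    }

    // Initialization-on-demand holder: the JVM loads Holder (and creates
    // INSTANCE) only on first use, and exactly once, in a thread-safe way.
    private static class Holder {
        static final Client INSTANCE = new Client();
    }

    // Every caller gets the same shared instance.
    static Client get() {
        return Holder.INSTANCE;
    }
}
```

With the real types, INSTANCE would be built once from nodeBuilder().clusterName("elasticsearch").client(true).data(false).node().client(), insertESDocument would call EsClientHolder.get() instead of constructing a node per request, and close() would move to a shutdown hook.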

Thanks,
Subhadip

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/34e8098a-5a0c-40dd-a5a0-7ee2d22fce4c%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


how to get all field values of a document which has been set during GetResponse

2014-04-04 Thread Subhadip Bagui
Hi,

I'm trying to get the field values of a document which have already been set, 
but I have to pass the field names while doing so. 
In the method call I may not know which fields someone has already set. 
Can I get all the fields from GetField without passing the field names?

  public static GetResponse getResponse(Client client)
  {
      GetResponse response = client.prepareGet("testindex", "testtype", "1")
                                   .setFields("author", "title")
                                   .execute().actionGet();
      return response;
  }

  public String getFieldValues()
  {
      Node node = nodeBuilder().client(true).data(false).node();
      Client client = node.client();

      GetResponse response = getResponse(client);
      Map<String, GetField> respMap = response.getFields();

      GetField field = respMap.get("title");
      String fields = (String) field.getValue();

      client.close();
      node.close();

      return fields;
  }
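Since response.getFields() returns a map keyed by field name (Map<String, GetField> in the Java API), the returned fields can be iterated without naming them up front. A sketch of that iteration, using plain JDK types in place of GetField so it runs standalone:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class FieldDump {
    // Walk every returned field without knowing its name in advance.
    // With the real client this map would be response.getFields() and
    // each value would come from field.getValue(); plain Object values
    // are used here so the sketch is self-contained.
    static String dump(Map<String, Object> fields) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, Object> e : fields.entrySet()) {
            sb.append(e.getKey()).append('=').append(e.getValue()).append(';');
        }
        return sb.toString();
    }
}
```

Note that a get request only returns the fields that were requested; to see everything without naming fields, reading response.getSource() (the whole _source map) is an alternative.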

Thanks,
Subhadip

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/010791d9-1e92-45a1-8be4-7df588ba9914%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: data distribution over shards and replicas

2014-04-02 Thread Subhadip Bagui
Thanks a lot, Mark. That explains a lot.

By backup I meant a copy of the same data.

One last question: for fast searching, which is the better choice, a single 
index with multiple shards or multiple indices with a single shard each?
 
Can you please give some reference on how Lucene splits documents and stores 
them in shards? That would help me get a better idea.


Thanks,
Subhadip

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/3785b01c-328f-4f4c-8dab-db93b73b2b5c%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: data distribution over shards and replicas

2014-04-02 Thread Subhadip Bagui


Thanks Mark for the prompt reply. I have some more doubts:

1. Suppose one index is running with 3 shards and 1 replica, and another index 
is running with the cluster settings, i.e. 5 shards and 2 replicas. Will 3+1 
and 5+2 shards both be available in the cluster? I have installed the 
elasticsearch-head plugin, but the replica shards are not showing there. 

For data distribution, does a replica shard also keep documents of other 
indices, or is it used only to keep a backup copy of the data?

2. So documents under the same index will be split due to sharding and 
distributed over the shards, right? Can we push all the documents for the same 
index into a particular shard? I don't want to use custom routing, as then I 
would need one field value common to all the documents. How can we find out 
which shard is holding which documents?

3. If I create one index with 2 shards and no replicas, and the node in the 
cluster holding these 2 shards dies, will I lose the data, or will the data 
have a copy in a cluster-level replica? If I have only 1 replica and the node 
holding the replica dies, how will the backup happen?
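On question 2 above: Elasticsearch picks the shard for each document from a routing value (the document id by default), roughly shard = hash(routing) % number_of_primary_shards. A simplified, self-contained sketch of that modulo step; the real implementation uses its own hash function, so the exact shard numbers produced here are illustrative only:

```java
class ShardRouting {
    // Simplified sketch of document routing:
    //   shard = hash(routing) % number_of_primary_shards
    // where routing defaults to the document id. What matters is that the
    // same routing value always lands on the same primary shard.
    static int shardFor(String routing, int numberOfPrimaryShards) {
        // floorMod keeps the result in [0, numberOfPrimaryShards) even for
        // negative hash codes.
        return Math.floorMod(routing.hashCode(), numberOfPrimaryShards);
    }
}
```

This is also why "which shard holds which document" is deterministic: recompute the formula with the document's routing value. Forcing all documents of an index into one shard would require either a single-shard index or a shared routing value.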

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/4d7d0243-dcd1-4ac7-9fef-1d6e44599ea1%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


data distribution over shards and replicas

2014-03-31 Thread Subhadip Bagui
Hi,

I've started working on elasticsearch and have some doubts about shards and 
replicas and how they handle data. I don't have any prior knowledge of Lucene.
As I understand it, Lucene splits data into segments and stores them on disk, 
and a shard is itself a Lucene index. Some of the doubts I have are:

1) There are two ways to do shard allocation: at the cluster level with config 
settings, and at the index level in the index settings. Suppose at the cluster 
level I set a maximum of 3 shards and at the index level I request 5 shards; 
how will the shards be allocated? I have one cluster with one node.

2) Suppose one index has 5 shards and 2 replicas and I'm pushing data with the 
bulk api; how will the data be stored? Will the same data be stored in all 5 
shards, or will the data be split and stored in chunks across the 5 shards? 
How will replicas keep a backup of the data of all 5 shards? 

3) Suppose I have 5 nodes and 10 shards distributed over the nodes, 2 shards 
each. When I index new documents, how will the data be stored across the 
nodes? 
Suppose the 5th node, which holds the 9th and 10th shards, goes down suddenly. 
Do I lose all the data stored in the 9th and 10th shards, or are the data 
already copied to the rest of the nodes?

Please explain.

Thanks,
Subhadip

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/4ac575bd-0d0a-4f5f-972e-7f3c54f2eb85%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Java API or REST API for client development ?

2014-03-26 Thread Subhadip Bagui

My app is in Java only. So what I mean is: should I use the elasticsearch Java 
client, or only the available REST APIs via HttpClient and the like?

Which will be more flexible for multi-platform integration?

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/2cd7714f-ff38-44a5-a5c2-fdde329c9874%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Java API or REST API for client development ?

2014-03-26 Thread Subhadip Bagui
Hi, 

We have a cloud management framework where all event data are to be stored in 
elasticsearch. I have to start the client-side code for this.
 
I need a suggestion here: which one should I use for the client, the 
elasticsearch Java API or the REST API?

Kindly suggest, and mention the pros and cons of each, so it will be easy for 
me to decide on the product design now rather than deal with hassle later.

Subhadip
 

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/a697bc63-cb55-4c9c-9a5e-2b315e443a0a%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: not getting results from java search api

2014-03-25 Thread Subhadip Bagui
I changed the SearchType as SearchType.QUERY_AND_FETCH and its giving 
correct response now.
Thanks...

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/0bb84d71-9c83-45a2-b4ff-00c775fdb598%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


not getting results from java search api

2014-03-25 Thread Subhadip Bagui
Hi,

I'm using the method below to get results from ES, but the search hit count is 
0. Please let me know the correct way to get results.

  public static SearchResponse searchIndex(Client client, Node node)
  {
      SearchRequestBuilder srequest = client.prepareSearch("testindex")
              .setTypes("testtype")
              .setSearchType(SearchType.DFS_QUERY_THEN_FETCH)
              .setQuery(QueryBuilders.termQuery("author", "shay"))
              .setSize(1);
      SearchResponse sresp = srequest.execute().actionGet();
      return sresp;
  }

data ==> 

"_source": {
    "tags": ["elasticsearch"],
    "content": "ElasticSearch provides the Java API",
    "author": "Shay Banon",
    "title": "ElasticSearch: Java API",
    "postDate": "2014-03-21T09:32:34.782Z"
}


Thanks,
Subhadip

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/293d28fb-5aaa-40cf-a77a-c7966c37536d%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: aggregation values with date time

2014-03-21 Thread Subhadip Bagui
Hi,

Please suggest...

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/5980b416-4b2d-4fae-9eb7-73e333d13fa9%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


aggregation values with date time

2014-03-21 Thread Subhadip Bagui
Hi,

I'm using the aggs below to get max and avg values from 2 date fields, but the 
result seems to come back in Unix time format. Can I get the result in a 
normal time format?

query==>
"aggs" : {
    "max_time" : {
        "max" : {
            "script" : "doc['gi_xdr_info_end_time'].value - doc['gi_xdr_info_start_time'].value"
        }
    }, ...
}
 
result ==>
   

"aggregations": {
    "max_time": { "value": 6.0 },
    "min_time": { "value": 0.0 },
    "avg_time": { "value": 4.6664 }
}


mapping ==>

"call_start_time": {
    "type": "date",
    "format": "dateOptionalTime"
}
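Date fields are exposed to aggregation scripts as epoch milliseconds, so the subtraction above yields a plain millisecond count, which is why the results look like Unix time. If a value needs to be shown as a timestamp rather than a raw number, one option is converting on the client side. A JDK-only sketch; the pattern and the fixed UTC zone are illustrative choices:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

class MillisFormat {
    // Convert an epoch-milliseconds value back into a readable timestamp.
    static String format(long epochMillis) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        // Fix the zone so the output does not depend on the local machine.
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        return fmt.format(new Date(epochMillis));
    }
}
```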

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/99d63453-b863-4954-9f0f-ec7ccfd522e2%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: not able to read reserved character in query dsl

2014-03-18 Thread Subhadip Bagui
I changed the mapping to not analyze the field, and it's giving correct data 
now. I also tried a bool query with terms, and it gives correct results for an 
exact match. I get the logic now.
Thanks a lot.

-
Subhadip

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/afa79050-52f6-4846-a945-523ae7d7883a%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: not able to read reserved character in query dsl

2014-03-18 Thread Subhadip Bagui
Thanks David. I tried the match query below and it's working fine.

{
    "match": {
        "CLOUD_TYPE": {
            "query": "AWS-EC2",
            "type": "phrase"
        }
    }
}

As you said, I can send the exact same term that is in the inverted index. 
Please let me know how to check which terms have been indexed so I can pass 
one in the query.

I tried to get the analyzed value like this:

GET /_analyze?text=AWS-EC2
{
    "tokens": [
        {
            "token": "aws",
            "start_offset": 0,
            "end_offset": 3,
            "type": "",
            "position": 1
        },
        {
            "token": "ec2",
            "start_offset": 4,
            "end_offset": 7,
            "type": "",
            "position": 2
        }
    ]
}

Now I tried with this, but got no search results:
{ "filter": { "term": { "CLOUD_TYPE": "aws ec2" } } }
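For reference, the 0-hit filter is expected here: the standard analyzer lowercases "AWS-EC2" and splits it into the two separate terms "aws" and "ec2", and a term filter matches exactly one indexed term, so neither "AWS-EC2" nor the two-word string "aws ec2" matches anything. A rough JDK-only approximation of that tokenization; the real analyzer is Unicode-aware, this only illustrates the effect:

```java
import java.util.ArrayList;
import java.util.List;

class AnalyzeSketch {
    // Rough approximation of the standard analyzer: lowercase, then split
    // on any non-alphanumeric run and drop empty tokens. This is why
    // "AWS-EC2" is indexed as the two terms "aws" and "ec2".
    static List<String> tokens(String text) {
        List<String> out = new ArrayList<>();
        for (String t : text.toLowerCase().split("[^a-z0-9]+")) {
            if (!t.isEmpty()) out.add(t);
        }
        return out;
    }
}
```

So a term filter would need one of the single tokens ("aws" or "ec2"), while matching the whole original string needs either a match phrase query or a not_analyzed field.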

Thanks,
Subhadip

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/8e7eb194-54d1-440d-a2ab-e57d15fb6f93%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: not able to pass multiple term in bool query

2014-03-17 Thread Subhadip Bagui
Hi Binh,

It was an issue with the JSON format creation. Now I have validated the JSON 
and I'm getting results.
Thanks.

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/19af3f5d-b17d-4f9f-863c-b10adaffe27f%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


not able to read reserved character in query dsl

2014-03-17 Thread Subhadip Bagui
Hi,

I have some document fields with values like "CLOUD_TYPE": "AWS-EC2".

I'm trying the query below to get the results, but it's giving me 0 hits. 
Please let me know the correct way to write the query.

POST /aricloud/_search
{
    "query": {
        "filtered": {
            "query": {
                "bool": {
                    "must": [
                        {
                            "match": { "NODE_STATUS": "ACTIVE" }
                        },
                        {
                            "constant_score": {
                                "filter": {
                                    "term": { "CLOUD_TYPE": "\"aws\"-ec2" }
                                },
                                "boost": 1.2
                            }
                        }
                    ],
                    "must_not": [
                        {
                            "term": { "NODE_ID": { "value": "12235" } }
                        }
                    ],
                    "should": [
                        {
                            "term": { "NODE_STATUS": { "value": "active" } }
                        }
                    ]
                }
            },
            "filter": {
                "range": {
                    "NODE_CREATE_TIME": {
                        "from": "2014-03-14 16:20:35",
                        "to": "2014-03-14 18:43:55"
                    }
                }
            }
        }
    },
    "sort": [
        {
            "NODE_ID": { "order": "desc" }
        }
    ]
}

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/fa712488-e8b1-4cc8-93c2-2029c4054558%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: not able to pass multiple term in bool query

2014-03-17 Thread Subhadip Bagui
My mistake!! The JSON format was wrong. I tried the query below and it's 
working fine.

Thanks, David.

> {
>     "query": {
>         "filtered": {
>             "query": {
>                 "bool": {
>                     "must": [
>                         {
>                             "range": {
>                                 "NODE_CATEGORY_ID": {
>                                     "gte": 10,
>                                     "lte": 20,
>                                     "boost": 2
>                                 }
>                             }
>                         }
>                     ],
>                     "must_not": [
>                         {
>                             "term": { "NODE_ID": { "value": "12235" } }
>                         }
>                     ],
>                     "should": [
>                         {
>                             "term": { "NODE_PRIVATE_IP_ADDRESS": { "value": "10.123.124.125" } }
>                         },
>                         {
>                             "term": { "NODE_PUBLIC_IP_ADDRESS": { "value": "125.31.108.82" } }
>                         }
>                     ]
>                 }
>             },
>             "filter": {
>                 "range": {
>                     "NODE_CREATE_TIME": {
>                         "from": "2014-03-14 16:22:35",
>                         "to": "2014-03-14 22:43:55"
>                     }
>                 }
>             }
>         }
>     },
>     "sort": [
>         {
>             "NODE_ID": { "order": "asc" }
>         }
>     ]
> }

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/571c5fe8-3219-46fa-82f4-6d3456e09d07%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


not able to pass multiple term in bool query

2014-03-17 Thread Subhadip Bagui
Hi,

I'm trying to write a bool query that sends multiple term clauses under the 
should block, as described at 
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-bool-query.html

But I'm getting a SearchPhaseExecutionException while trying to execute the 
query below. Please tell me the correct way to send the query.

POST /aricloud/_search
{
"query": {
"filtered": {
   "query": {
   "bool": {
  "must": [
 {
   "range": {
  "NODE_CATEGORY_ID": {
 "gte": 10,
 "lte": 20,
 "boost": 2.0
  }
   }
  }
  ] ,
  "must_not": [
 {
   "term": {
 "NODE_ID": {
  "value": "12235"
}
} 
 }
  ],
  "should": [
 {
"term": {
   "NODE_PRIVATE_IP_ADDRESS": {
  "value": "10.123.124.40"
   }
},
"term": {
   "NODE_PUBLIC_IP_ADDRESS": {
  "value": "125.31.108.72"
   }
}
}
 }
  ]
   }
   },
   "filter": {
   "range": {
  "NODE_CREATE_TIME": {
 "from": "2014-03-14 16:22:35",
 "to": "2014-03-14 22:43:55"
  }
   }
   }
}
},
"sort": [
   {
  "NODE_ID": {
 "order": "asc"
  }
   }
]
}


-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/fb6b4846-7ceb-4046-a17b-c33de4d7bc79%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: bool query with filter giving error

2014-03-17 Thread Subhadip Bagui
Hi Clinton,

Thanks for your reply. I tried as suggested, and the same is working now :) 
One question though: I have to pass the text field in lowercase always, as it 
is getting analyzed by the standard analyzer, I guess. Is there any way to 
pass multiple match clauses in a bool query for text search, so that I can 
search with part of the exact text entered, like "CLOUD" instead of "cloud"?
 
I tried the below, as given in 
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-multi-match-query.html
but it's giving the error 'Duplicate key "match" - syntax error'.

POST /aricloud/_search
{
"query": {
"filtered": {
   "query": {
   "bool": {
  "must": [
 {"term": {
 "CLOUD_TYPE": {
  "value": "cloudstack"
}
 }
  }
  ] ,
  "must_not": [
 {
   "term": {
 "NODE_ID": {
  "value": "12235"
}
} 
 }
  ],
  "should": [
 {
"match": {
   "NODE_HOSTNAME": "cloudserver.aricent.com"
},
"match": {
   "NODE_GROUP_NAME": "MYSQL"
}
 }
  ]
   }
   },
   "filter": {
   "range": {
  "NODE_CREATE_TIME": {
 "from": "2014-03-14 16:32:35",
 "to": "2014-03-14 18:43:55"
  }
   }
   }
}
},
"sort": [
   {
  "NODE_ID": {
 "order": "desc"
  }
   }
]
}
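A JSON object cannot repeat a key, which is what the 'Duplicate key "match"' error points at: the two match clauses above were placed inside one object. Each clause has to be its own object inside the should array. A sketch of that shape, reusing the field names from the query above:

```json
"should": [
    { "match": { "NODE_HOSTNAME": "cloudserver.aricent.com" } },
    { "match": { "NODE_GROUP_NAME": "MYSQL" } }
]
```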

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/8aac212b-018c-4f51-8222-9bb00a09377d%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


bool query with filter giving error

2014-03-14 Thread Subhadip Bagui
Hi,

I'm trying to run the bool query below with a range filter to fetch all the 
node data with CLOUD_TYPE="AWS-EC2" and NODE_STATUS="ACTIVE", but I'm getting 
a SearchPhaseExecutionException from elasticsearch. Please let me know the 
correct way to do this.

  
curl -XPOST "http://10.203.251.142:9200/aricloud/_search" -d
'{
    "filtered": {
        "query": {
            "bool": {
                "must": [
                    {
                        "term": { "CLOUD_TYPE": "AWS-EC2" }
                    },
                    {
                        "term": { "NODE_STATUS": "ACTIVE" }
                    }
                ]
            }
        },
        "filter": {
            "range": {
                "NODE_CREATE_TIME": {
                    "to": "2014-03-14 18:43:55",
                    "from": "2014-03-14 16:22:32"
                }
            }
        }
    },
    "sort": { "NODE_ID": "desc" },
    "from": 0,
    "size": 3
}'
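One likely cause of the SearchPhaseExecutionException here is that "filtered" sits at the top level of the request body, while the search API expects it under a "query" key. A sketch of the corrected outline, with the clause bodies elided:

```json
{
    "query": {
        "filtered": {
            "query":  { "bool": { ... } },
            "filter": { "range": { ... } }
        }
    },
    "sort": { "NODE_ID": "desc" },
    "from": 0,
    "size": 3
}
```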

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/4e53acd6-d68a-43fe-8340-72ae695e0060%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


max_score is not coming for query with filter search

2014-03-14 Thread Subhadip Bagui
Hi,

I'm running the query below to fetch data date-wise, but in the result the 
score comes back as null. How do I get the score here?

curl -XPOST "10.203.251.142:9200/aricloud/_search" -d
'{
    "query": {
        "filtered": {
            "query": {
                "match": { "CLOUD_TYPE": "AWS-EC2" }
            },
            "filter": {
                "range": {
                    "NODE_CREATE_TIME": {
                        "to": "2014-03-14 18:43:55",
                        "from": "2014-03-14 16:22:32"
                    }
                }
            }
        }
    },
    "sort": { "NODE_ID": "desc" },
    "from": 0,
    "size": 3
}'
==>
{
    "took": 1,
    "timed_out": false,
    "_shards": { "total": 3, "successful": 3, "failed": 0 },
    "hits": {
        "total": 5,
        "max_score": null,
        "hits": [
            {
                "_index": "aricloud",
                "_type": "nodes",
                "_id": "4",
                "_score": null,
                "_source": {
                    "NODE_ID": "12334",
                    "CLOUD_TYPE": "AWS-EC2",
                    "NODE_GROUP_NAME": "DATABASE",
                    "NODE_CPU": "5GHZ",
                    "NODE_HOSTNAME": "virtualnode.aricent.com",
                    "NODE_NAME": "aws-node4",
                    "NODE_PRIVATE_IP_ADDRESS": "10.123.124.126",
                    "NODE_PUBLIC_IP_ADDRESS": "125.31.108.72",
                    "NODE_INSTANCE_ID": "asw126",
                    "NODE_STATUS": "STOPPED",
                    "NODE_CATEGORY_ID": "14",
                    "NODE_CREATE_TIME": "2014-03-14 16:35:35"
                },
                "sort": ["12334"]
            }
        ]
    }
}
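For reference, when a custom sort is given, Elasticsearch skips computing relevance scores, which is why max_score and _score come back null; scoring can be requested alongside the sort with track_scores. A sketch of the addition, with the rest of the body elided:

```json
{
    "track_scores": true,
    "query": { "filtered": { ... } },
    "sort": { "NODE_ID": "desc" }
}
```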

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/21f413c0-1a13-4ebf-9a49-d43e3267e46b%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: query filter is not working

2014-03-13 Thread Subhadip Bagui
Hi David,

I have done the following steps you suggested. The string search is working 
now.

But for the filter I always have to pass strings in lowercase, whereas for the 
query text search I can give the proper string sequence inserted in the doc 
(queries shown below).

Maybe this is very basic and I'm doing something wrong. I'm a week old on 
elasticsearch and trying to understand the Query DSL and text search. Please 
help me clear up the concepts.


1) curl -XDELETE http://10.203.251.142:9200/log-2014.03.03

2) 
curl -XPUT http://10.203.251.142:9200/log-2014.03.03/ -d 
'{
"settings": {
"index": {
"number_of_shards": 3,
"number_of_replicas": 0,
"index.cache.field.type": "soft",
"index.refresh_interval": "30s",
"index.store.compress.stored": true
}
},
"mappings": {
"apache-log": {
"properties": {
"message": {
"type": "string",
"fields": {
"actual": {
"type": "string",
"index": "not_analyzed"
}
}
},
"@version": {
"type": "long",
"index": "not_analyzed"
},
"@timestamp": {
"type": "date",
"format": "yyyy-MM-dd HH:mm:ss",
"index": "not_analyzed"
},
"type": {
"type": "string",
"index": "not_analyzed"
},
"host": {
"type": "string",
"index": "not_analyzed"
},
"path": {
"type": "string",
"norms": {
"enabled": false
},
"index": "not_analyzed"
}
}
}
}
}'

3) curl -XPUT http://10.203.251.142:9200/_bulk -d '
{ "index": {"_index": "log-2014.03.03", "_type": "apache-log", "_id": "1"}}
{ "message": "03-03-2014 18:39:35,025 DEBUG 
[org.springframework.scheduling.quartz.SchedulerFactoryBean#0_Worker-8]   
com.aricent.aricloud.monitoring.CloudController 121 - 
com.sun.jersey.core.spi.factory.ResponseImpl@1139f1b","@version": 
"1","@timestamp": "2014-03-03 18:39:35","type": "apache-access", "host": 
"cloudclient.aricent.com", "path": 
"/opt/apache-tomcat-7.0.40/logs/aricloud/monitoring.log" }
{ "index": {"_index": "log-2014.03.03", "_type": "apache-log", "_id": "2"}}
{ "message": "\tat org.quartz.core.JobRunShell.run(JobRunShell.java:223)", 
"@version": "1", "@timestamp": "2014-03-03 18:39:36","type": 
"apache-access", "host": "cloudclient.aricent.com", "path": 
"/opt/apache-tomcat-7.0.40/logs/aricloud/monitoring.log" }
{ "index": {"_index": "log-2014.03.03", "_type": "apache-log", "_id": "3"}}
{ "message": "03-03-2014 18:39:35,030  INFO 
[org.springframework.scheduling.quartz.SchedulerFactoryBean#0_Worker-8] 
com.amazonaws.http.HttpClientFactory 128 - Configuring Proxy. Proxy Host: 
10.203.193.227 Proxy Port: 80", "@version": "2", "@timestamp": "2014-03-03 
18:40:35", "type": "apache-access", "host": "cloudclient.aricent.com", 
"path": "/opt/apache-tomcat-7.0.40/logs/aricloud/monitoring.log" }
{ "index": {"_index": "log-2014.03.03", "_type": "apache-log", "_id": "4"}}
{ "message": "\tat 
org.apache.http.protocol.ImmutableHttpProcessor.process(ImmutableHttpProcessor.java:109)",
 
"@version": "3", "@timestamp": "2014-03-03 18:43:35", "type": 
"apache-access", "host": "cloudclient.aricent.com", "path": 
"/opt/apache-tomcat-7.0.40/logs/aricloud/monitoring.log" }
{ "index": {"_index": "log-2014.03.03", "_type": "apache-log", "_id": "5"}}
{ "message": "03-03-2014 18:45:30,002 DEBUG 
[org.springframework.scheduling.quartz.SchedulerFactoryBean#0_Worker-9] 
com.aricent.aricloud.monitoring.scheduler.SchedulerJob 22 - Entering 
SchedulerJob", "@version": "3", "@timestamp": "2014-03-03 18:45:35", 
"type": "apache-access", "host": "cloudclient.aricent.com", "path": 
"/opt/apache-tomcat-7.0.40/logs/aricloud/monitoring.log" }
\n'

4) curl -XGET 'http://10.203.251.142:9200/log-2014.03.03/_refresh'


5) query ==>
curl -XPOST http://10.203.251.142:9200/log-2014.03.03/_search -d
'{
"query": {
"match": {
"message" : {
"query": "Proxy Port",
"type" : "phrase"
}
  }
   }
}'

This one returns no results:
curl -XPOST http://10.203.251.142:9200/log-2014.03.03/_search -d
'{
"query": {
"constant_score": {
"filter": {
"term": { "message": "DEBUG" }
}
}
}
}
'


Re: query filter is not working

2014-03-13 Thread Subhadip Bagui
Hi David,

I have done the following steps you suggested. The exact string search is 
working now, but when I try the query below for string matching, it gives a 
null result.

Maybe this is very basic and I'm doing something wrong. I'm a week old on 
elasticsearch and trying to understand the Query DSL and text search. Please 
help me clear up the concepts.

create index
create mapping
create a doc
refresh
query
--
query==>
{
"query": {
"constant_score": {
"filter": {
"term": { "message.original": 
"org.apache.http.protocol.immutablehttpprocessor.process" }
}
}
}
}

mapping ==>
{
    "log-2014.03.03": {
        "mappings": {
            "apache-log": {
                "properties": {
                    "@timestamp": {
                        "type": "date",
                        "format": "yyyy-MM-dd HH:mm:ss"
                    },
                    "@version": { "type": "long" },
                    "host": { "type": "string", "index": "not_analyzed" },
                    "message": {
                        "type": "string",
                        "fields": {
                            "actual": {
                                "type": "string",
                                "index": "not_analyzed"
                            }
                        }
                    },
                    "path": { "type": "string", "index": "not_analyzed" },
                    "type": { "type": "string", "index": "not_analyzed" }
                }
            }
        }
    }

doc ==>
{
  "_index": "log-2014.03.03",
  "_type": "apache-log",
  "_id": "5",
  "_version": 1,
  "found": true,
  "_source": {
    "message": "org.apache.http.protocol.ImmutableHttpProcessor.process",
    "@version": "3",
    "@timestamp": "2014-03-03 18:45:35",
    "type": "apache-access",
    "host": "cloudclient.aricent.com",
    "path": "/opt/apache-tomcat-7.0.40/logs/aricloud/monitoring.log"
  }
}
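One detail worth checking in the query and mapping above: the filter targets `message.original`, but the mapping declares the not_analyzed subfield as `message.actual`, so the term filter points at a field that does not exist. A small sanity-check sketch (plain Python over a toy dict modeled on the mapping above):

```python
# Toy excerpt of the "message" property from the mapping above.
message_mapping = {
    "type": "string",
    "fields": {
        "actual": {"type": "string", "index": "not_analyzed"}
    }
}

def subfield_declared(prop_mapping, subfield):
    """True if the queried subfield is declared under 'fields'."""
    return subfield in prop_mapping.get("fields", {})

print(subfield_declared(message_mapping, "original"))  # False -- query misses
print(subfield_declared(message_mapping, "actual"))    # True
```

Querying `message.actual` with the exact, case-sensitive source string would then be the not_analyzed route, as in the earlier `director.original` example.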



Re: query filter is not working

2014-03-12 Thread Subhadip Bagui
Hi David,

The data is coming through Logstash and taking the default mapping in 
Elasticsearch. Can I update the mapping for that existing index?
Please let me know.




Re: query filter is not working

2014-03-12 Thread Subhadip Bagui
Hi Binh,

The query you gave is working. Thanks for your help.
But if I change the query and search for the string "requestproxyauthentication" 
instead, it's not working. Below is the mapping for the message field.

I'm trying to understand how Elasticsearch analyzes the field data. Please 
comment.

"message": {
  "type": "string",
  "norms": {
    "enabled": false
  },
  "fields": {
    "raw": {
      "type": "string",
      "index": "not_analyzed",
      "ignore_above": 256
    }
  }
}
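A rough mental model may help here (a deliberate simplification, not the real standard analyzer): the analyzed `message` field stores lowercased tokens, while `message.raw` stores the original string verbatim, and a `term` filter matches only an exact stored value. Note also that the real standard analyzer follows Unicode word-break rules and does not split on dots between letters, so a dotted class name can survive as one long token — which would explain why the single word `requestproxyauthentication` finds nothing here.

```python
import re

def toy_analyze(text):
    """Crude stand-in for an analyzer: lowercase, split on non-alphanumerics.
    (The real standard analyzer differs, e.g. it keeps letter.letter together.)"""
    return [t for t in re.split(r"[^a-z0-9]+", text.lower()) if t]

source = "at org.apache.http.client.protocol.RequestProxyAuthentication.process"

analyzed_tokens = toy_analyze(source)  # roughly what "message" stores
raw_value = source                     # exactly what "message.raw" stores

print("requestproxyauthentication" in analyzed_tokens)  # True in this toy model
print("RequestProxyAuthentication" in analyzed_tokens)  # False: tokens are lowercased
print(raw_value == source)                              # raw keeps the whole line
```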



Re: query filter is not working

2014-03-11 Thread Subhadip Bagui
Hi David,

I have done the following as a test sample.

1. deleted index.
2. create index by following
   curl -XPUT "http://localhost:9200/movies/" -d
   '{ "index": {"_index": "movies", "_type": "movie", "_id": "1"}}'

3. creating doc
   curl -XPUT "http://localhost:9200/movies/movie/1" -d
   '{
     "title": "The Godfather",
     "director": "Francis Ford Coppola",
     "year": 1972,
     "genres": ["Crime", "Drama"]
   }'

4. creating mapping
   curl -XPUT "http://localhost:9200/movies/movie/_mapping" -d
   '{
     "movie": {
       "properties": {
         "director": {
           "type": "multi_field",
           "fields": {
             "director": {"type": "string"},
             "original": {"type": "string", "index": "not_analyzed"}
           }
         }
       }
     }
   }'

5. query
   curl -XPOST "http://localhost:9200/_search" -d
   '{
     "query": {
       "constant_score": {
         "filter": {
           "term": { "director.original": "Francis Ford Coppola" }
         }
       }
     }
   }'

Please let me know what I am missing. I checked for the template.

-Subhadip



Re: query filter is not working

2014-03-11 Thread Subhadip Bagui
mapping...default

{
  "logstash-2014.03.03": {
    "mappings": {
      "apache-access": {
        "dynamic_templates": [
          {
            "string_fields": {
              "mapping": {
                "type": "multi_field",
                "fields": {
                  "raw": {
                    "index": "not_analyzed",
                    "ignore_above": 256,
                    "type": "string"
                  },
                  "{name}": {
                    "index": "analyzed",
                    "omit_norms": true,
                    "type": "string"
                  }
                }
              },
              "match": "*",
              "match_mapping_type": "string"
            }
          }
        ],
        "properties": {
          "@timestamp": {
            "type": "date",
            "format": "dateOptionalTime"
          },
          "@version": {
            "type": "string",
            "index": "not_analyzed"
          },
          "geoip": {
            "dynamic": "true",
            "properties": {
              "location": {
                "type": "geo_point"
              }
            }
          },
          "host": {
            "type": "string",
            "norms": { "enabled": false },
            "fields": {
              "raw": {
                "type": "string",
                "index": "not_analyzed",
                "ignore_above": 256
              }
            }
          },
          "message": {
            "type": "string",
            "norms": { "enabled": false },
            "fields": {
              "raw": {
                "type": "string",
                "index": "not_analyzed",
                "ignore_above": 256
              }
            }
          },
          "path": {
            "type": "string",
            "norms": { "enabled": false },
            "fields": {
              "raw": {
                "type": "string",
                "index": "not_analyzed",
                "ignore_above": 256
              }
            }
          },
          "query": {
            "properties": {
              "constant_score": {
                "properties": {
                  "filter": {
                    "properties": {
                      "term": {
                        "properties": {
                          "@timestamp": {
                            "type": "date",
                            "format": "dateOptionalTime"
                          }
                        }
                      }
                    }
                  }
                }
              }
            }
          },
          "type": {
            "type": "string",
            "norms": { "enabled": false },
            "fields": {
              "raw": {
                "type": "string",
                "index": "not_analyzed",
                "ignore_above": 256
              }
            }
          }
        }
      }
    }
  }
}



Re: query filter is not working

2014-03-11 Thread Subhadip Bagui
mapping...

{
  "movies": {
    "mappings": {
      "movie": {
        "properties": {
          "director": {
            "type": "string",
            "fields": {
              "original": {
                "type": "string",
                "index": "not_analyzed"
              }
            }
          },
          "genres": {
            "type": "string"
          },
          "query": {
            "properties": {
              "query_string": {
                "properties": {
                  "query": { "type": "string" }
                }
              }
            }
          },
          "title": {
            "type": "string"
          },
          "year": {
            "type": "long"
          }
        }
      }
    }
  }
}



Re: query filter is not working

2014-03-11 Thread Subhadip Bagui
Hi David,

Trying to query as follows, but still getting a null result. Please suggest.

{
  "query": {
    "constant_score": {
      "filter": {
        "term": { "message": "requestproxyauthentication" }
      }
    }
  }
}


-Subhadip



Re: elasticsearch is taking more memory than allocated heap when using bootstrap.mlockall: true

2014-03-11 Thread Subhadip Bagui


Hi,

Actually, if I set ES_MAX_MEM to some value, Elasticsearch starts with at most 
that much memory usage. But when I use *bootstrap.mlockall: true* and start 
Elasticsearch, it takes more memory than ES_MAX_MEM.

What I'm trying to understand is how much memory Elasticsearch will occupy 
in process memory pages, so that I can plan the RAM usage.

-Subhadip



query filter is not working

2014-03-11 Thread Subhadip Bagui
Hi,

I have an index in my Elasticsearch like below:

{
  "_index": "logstash-2014.03.03",
  "_type": "apache-access",
  "_id": "snCPRnSHSvm_aaeuHxB84w",
  "_version": 1,
  "found": true,
  "_source": {
    "message": "\tat org.apache.http.client.protocol.RequestProxyAuthentication.process(RequestProxyAuthentication.java:89)",
    "@version": "1",
    "@timestamp": "2014-03-03T18:39:35.425+05:30",
    "type": "apache-access",
    "host": "cloudclient.aricent.com",
    "path": "/opt/apache-tomcat-7.0.40/logs/aricloud/monitoring.log"
  }
}

I'm trying to query the data using a filter; below is the query:

{
  "query": {
    "constant_score": {
      "filter": {
        "term": { "message": "RequestProxyAuthentication" }
      }
    }
  }
}

but the same is giving a null result from the index search:
{
  "took": 275,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  },
  "hits": {
    "total": 0,
    "max_score": null,
    "hits": []
  }
}

Kindly let me know how to pass the query string.

-Subhadip



elasticsearch is taking more memory than allocated heap when using bootstrap.mlockall: true

2014-03-10 Thread Subhadip Bagui


Hi,

I've allocated the max and min heap the same with *ES_HEAP_SIZE=1200m* in 
Elasticsearch, and I'm using *bootstrap.mlockall: true* as suggested by 
Elasticsearch so that process memory won't get swapped.

But when I start Elasticsearch it takes more memory than the max heap 
mentioned, like 1.4g, and holds that memory. The usage is not fluctuating 
now, which is a good thing.

Can you please explain why it's taking more memory for the JVM than allocated.
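For what it's worth, the heap setting only caps the Java heap; the JVM process also uses non-heap memory (permgen/metaspace, the JIT code cache, thread stacks, NIO direct buffers for Netty and Lucene), and with `bootstrap.mlockall: true` all of those pages get locked into RAM at once. A back-of-the-envelope sketch, where every non-heap figure is an illustrative assumption rather than a measured value:

```python
# All sizes in MB; the non-heap figures are illustrative assumptions.
heap_mb = 1200  # ES_HEAP_SIZE / -Xmx
non_heap_mb = {
    "permgen_or_metaspace": 80,
    "jit_code_cache": 50,
    "thread_stacks": 60,       # e.g. ~120 threads x 512 KB
    "nio_direct_buffers": 100  # Netty / Lucene direct memory
}

expected_rss_mb = heap_mb + sum(non_heap_mb.values())
print(expected_rss_mb)  # 1490 -- in the ballpark of the ~1.4g observed
```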

--Subhadip




Re: Parse Failure [Failed to parse source...] - while doing string search in elasticsearch

2014-03-10 Thread Subhadip Bagui
Thanks Binh. It's working now...


Re: Parse Failure [Failed to parse source...] - while doing string search in elasticsearch

2014-03-07 Thread Subhadip Bagui
Hi,

I haven't used the Elasticsearch API for the search; I'm searching the string 
using the Kibana GUI. Please let me know how to enable special-character 
search through Kibana.

---


Parse Failure [Failed to parse source...] - while doing string search in elasticsearch

2014-03-06 Thread Subhadip Bagui
Hi,

I'm new to Elasticsearch and need some help. I'm getting the below exception 
while trying to search a string in Elasticsearch using Kibana. These are 
the components I'm using:
1. logstash 1.3.3
2. redis: 2.4.10
3. elasticsearch: 1.0
4. kibana

I'm running Elasticsearch with defaults, a single cluster and 1 node. Please 
help to resolve this issue.
--
elasticsearch conf:
##
bootstrap.mlockall: true
indices.cache.filter.size: 20%
index.cache.field.max_size: 5
index.cache.field.expire: 10m
indices.fielddata.cache.size: 30%
indices.fielddata.cache.expire: -1
indices.memory.index_buffer_size: 50%
index.refresh_interval: 30s
index.translog.flush_threshold_ops: 5
index.store.compress.stored: true

threadpool.search.type: fixed
threadpool.search.size: 20
threadpool.search.queue_size: 100

threadpool.index.type: fixed
threadpool.index.size: 30
threadpool.index.queue_size: 200

threadpool.bulk.type: fixed
threadpool.bulk.size: 60
threadpool.bulk.queue_size: 300
##


[2014-03-06 13:27:08,235][DEBUG][action.search.type   ] [Katherine 
"Kitty" Pryde] [logstash-2014.03.06][1], node[QUrNRW5cSi6IS3P_BCZkHw], [P], 
s[STARTED]: Failed to execute 
[org.elasticsearch.action.search.SearchRequest@1c03a4f] lastShard [true]
org.elasticsearch.search.SearchParseException: [logstash-2014.03.06][1]: 
from[-1],size[-1]: Parse Failure [Failed to parse source 
[{"query":{"filtered":{"query":{"bool":{"should":[{"query_string":{"query":"/ec2.ap-southeast-1.amazonaws.com"}}]}},"filter":{"bool":{"must":[{"match_all":{}},{"range":{"@timestamp":{"from":1394006201505,"to":"now"}}},{"bool":{"must":[{"match_all":{}}]}}],"highlight":{"fields":{},"fragment_size":2147483647,"pre_tags":["@start-highlight@"],"post_tags":["@end-highlight@"]},"size":2,"sort":[{"@timestamp":{"order":"desc"}}]}]]
at 
org.elasticsearch.search.SearchService.parseSource(SearchService.java:586)
at 
org.elasticsearch.search.SearchService.createContext(SearchService.java:489)
at 
org.elasticsearch.search.SearchService.createContext(SearchService.java:474)
at 
org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:467)
at 
org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:239)
at 
org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteQuery(SearchServiceTransportAction.java:202)
at 
org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.sendExecuteFirstPhase(TransportSearchQueryThenFetchAction.java:80)
at 
org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:216)
at 
org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:203)
at 
org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$2.run(TransportSearchTypeAction.java:186)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:662)
Caused by: org.elasticsearch.index.query.QueryParsingException: 
[logstash-2014.03.06] Failed to parse query 
[/ec2.ap-southeast-1.amazonaws.com]
at 
org.elasticsearch.index.query.QueryStringQueryParser.parse(QueryStringQueryParser.java:235)
at 
org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:223)
at 
org.elasticsearch.index.query.BoolQueryParser.parse(BoolQueryParser.java:107)
at 
org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:223)
at 
org.elasticsearch.index.query.FilteredQueryParser.parse(FilteredQueryParser.java:71)
at 
org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:223)
at 
org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:321)
at 
org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:260)
at 
org.elasticsearch.search.query.QueryParseElement.parse(QueryParseElement.java:33)
at 
org.elasticsearch.search.SearchService.parseSource(SearchService.java:574)
... 12 more
Caused by: org.apache.lucene.queryparser.classic.ParseException: Cannot 
parse '/ec2.ap-southeast-1.amazonaws.com': Lexical error at line 1, column 
34.  Encountered:  after : "/ec2.ap-southeast-1.amazonaws.com"
at 
org.apache.lucene.queryparser.classic.QueryParserBase.parse(QueryParserBase.java:130)
at 
org.apache.lucene.queryparser.classic.MapperQueryParser.parse(MapperQueryParser.java:882)
at 
org.elasticsearch.index.query.QueryStringQueryParser.parse(QueryStringQueryParser.java:218)
... 21 more 
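The root cause in the trace is the Lucene query-string parser choking on the unescaped `/` in `/ec2.ap-southeast-1.amazonaws.com` (in recent Lucene versions `/` delimits a regexp and is a reserved character). One common workaround is to backslash-escape Lucene's reserved characters before the text reaches a `query_string` query; a minimal sketch (the reserved-character list follows Lucene's classic query-parser syntax, but treat it as an assumption for your exact version):

```python
import re

# Backslash-escape characters the Lucene query-string parser reserves:
# + - & | ! ( ) { } [ ] ^ " ~ * ? : \ /   ('/' delimits a regexp in newer Lucene)
def escape_query_string(text):
    return re.sub(r'([+\-&|!(){}\[\]^"~*?:\\/])', r'\\\1', text)

escaped = escape_query_string("/ec2.ap-southeast-1.amazonaws.com")
print(escaped)  # \/ec2.ap\-southeast\-1.amazonaws.com
```

Alternatively, a `match` query or a `term` filter on a not_analyzed field avoids the query-string syntax entirely.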


Re: elasticsearch is giving error as "failed to start shard"

2014-03-03 Thread Subhadip Bagui


Hi,

I'm using Elasticsearch 1.0 and logstash 1.3.3 with redis_version: 2.4.10.
Whenever I try to start the logstash indexer, it gives me the below error, 
"failed to start shard". I deleted all the indices from Elasticsearch 
before starting.

[2014-03-03 17:04:33,178][WARN ][transport.netty  ] [Mantra] 
Message not fully read (response) for [2114] handler 
future(org.elasticsearch.indices.recovery.RecoveryTarget$4@14487c8), error 
[true], resetting

[2014-03-03 17:04:33,183][WARN ][indices.cluster  ] [Mantra] 
[logstash-2014.03.03][3] failed to start shard

org.elasticsearch.indices.recovery.RecoveryFailedException: 
[logstash-2014.03.03][3]: Recovery failed from 
[Brutacus][L0THepPDR6Cmx7O2jqVLdg][cloudclient1.aricent.com][inet[/10.203.251.143:9300]]
 
into 
[Mantra][cKMntkkDT5Ws8g8_yeoTUg][cloudclient.aricent.com][inet[/10.203.251.142:9300]]

at 
org.elasticsearch.indices.recovery.RecoveryTarget.doRecovery(RecoveryTarget.java:303)

at 
org.elasticsearch.indices.recovery.RecoveryTarget.access$300(RecoveryTarget.java:65)

at 
org.elasticsearch.indices.recovery.RecoveryTarget$2.run(RecoveryTarget.java:171)

at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)

at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)

at java.lang.Thread.run(Thread.java:662)

---

Please help to resolve the issue.

Thanks,
Subhadip

On Saturday, March 1, 2014 2:58:35 PM UTC+5:30, Subhadip Bagui wrote:
>
> Hi ,
>
> I'm trying to move Elasticsearch to a new system, but while trying to 
> start the service it gives the below exception, although the process is 
> running. There was an old elasticsearch 0.90 copy there which is not running 
> now, as I checked. Please suggest.
>
> [2014-02-28 16:40:13,588][WARN ][transport.netty  ] [DJ] Message 
> not fully read (response) for [4] handler 
> future(org.elasticsearch.indices.recovery.RecoveryTarget$4@1b4268b), error 
> [true], resetting
> [2014-02-28 16:40:13,589][WARN ][transport.netty  ] [DJ] Message 
> not fully read (response) for [3] handler 
> future(org.elasticsearch.indices.recovery.RecoveryTarget$4@151d6cb), error 
> [true], resetting
> [2014-02-28 16:40:13,590][WARN ][indices.cluster  ] [DJ] 
> [logstash-2014.02.27][1] failed to start shard
> org.elasticsearch.indices.recovery.RecoveryFailedException: 
> [logstash-2014.02.27][1]: Recovery failed from 
> [Brutacus][L0THepPDR6Cmx7O2jqVLdg][cloudclient1.aricent.com][inet[/
> 10.203.251.143:9300]] into [DJ][LajLfvlYSAGZdsyFnorwEA][
> cloudclient.aricent.com][inet[/10.203.251.142:9300]]
> at 
> org.elasticsearch.indices.recovery.RecoveryTarget.doRecovery(RecoveryTarget.java:303)
> at 
> org.elasticsearch.indices.recovery.RecoveryTarget.access$300(RecoveryTarget.java:65)
> at 
> org.elasticsearch.indices.recovery.RecoveryTarget$2.run(RecoveryTarget.java:171)
>
> Thanks,
> Subhadip
>



Re: elasticsearch is giving error as "failed to start shard"

2014-03-03 Thread Subhadip Bagui
Hi,

When I deleted all the indices using the REST API, Elasticsearch came up 
fine. Any help identifying the issue?

Thanks




Re: elasticsearch is giving error as "failed to start shard"

2014-03-02 Thread Subhadip Bagui
Hi, 

Please suggest how to resolve the issue. 

Thanks, 
Subhadip




Re: elasticsearch is giving error as "failed to start shard"

2014-03-02 Thread Subhadip Bagui
Hi,

Please help to fix the issue.


On Saturday, March 1, 2014 2:58:35 PM UTC+5:30, Subhadip Bagui wrote:
>
> Hi,
>
> I'm trying to move Elasticsearch to a new system, but when I try to 
> start the service it throws the exception below, although the process is 
> running. An old Elasticsearch 0.90 installation was copied over; it is no 
> longer running, as I checked. Please suggest.
>
> [2014-02-28 16:40:13,588][WARN ][transport.netty  ] [DJ] Message 
> not fully read (response) for [4] handler 
> future(org.elasticsearch.indices.recovery.RecoveryTarget$4@1b4268b), error 
> [true], resetting
> [2014-02-28 16:40:13,589][WARN ][transport.netty  ] [DJ] Message 
> not fully read (response) for [3] handler 
> future(org.elasticsearch.indices.recovery.RecoveryTarget$4@151d6cb), error 
> [true], resetting
> [2014-02-28 16:40:13,590][WARN ][indices.cluster  ] [DJ] 
> [logstash-2014.02.27][1] failed to start shard
> org.elasticsearch.indices.recovery.RecoveryFailedException: 
> [logstash-2014.02.27][1]: Recovery failed from 
> [Brutacus][L0THepPDR6Cmx7O2jqVLdg][cloudclient1.aricent.com][inet[/
> 10.203.251.143:9300]] into [DJ][LajLfvlYSAGZdsyFnorwEA][
> cloudclient.aricent.com][inet[/10.203.251.142:9300]]
> at 
> org.elasticsearch.indices.recovery.RecoveryTarget.doRecovery(RecoveryTarget.java:303)
> at 
> org.elasticsearch.indices.recovery.RecoveryTarget.access$300(RecoveryTarget.java:65)
> at 
> org.elasticsearch.indices.recovery.RecoveryTarget$2.run(RecoveryTarget.java:171)
>
> Thanks,
> Subhadip
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/e3936205-8ba2-4bd9-a76c-c5f18142e1c6%40googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


elasticsearch is giving error as "failed to start shard"

2014-03-01 Thread Subhadip Bagui
Hi,

I'm trying to move Elasticsearch to a new system, but when I try to 
start the service it throws the exception below, although the process is 
running. An old Elasticsearch 0.90 installation was copied over; it is no 
longer running, as I checked. Please suggest.

[2014-02-28 16:40:13,588][WARN ][transport.netty  ] [DJ] Message 
not fully read (response) for [4] handler 
future(org.elasticsearch.indices.recovery.RecoveryTarget$4@1b4268b), error 
[true], resetting
[2014-02-28 16:40:13,589][WARN ][transport.netty  ] [DJ] Message 
not fully read (response) for [3] handler 
future(org.elasticsearch.indices.recovery.RecoveryTarget$4@151d6cb), error 
[true], resetting
[2014-02-28 16:40:13,590][WARN ][indices.cluster  ] [DJ] 
[logstash-2014.02.27][1] failed to start shard
org.elasticsearch.indices.recovery.RecoveryFailedException: 
[logstash-2014.02.27][1]: Recovery failed from 
[Brutacus][L0THepPDR6Cmx7O2jqVLdg][cloudclient1.aricent.com][inet[/10.203.251.143:9300]]
 
into 
[DJ][LajLfvlYSAGZdsyFnorwEA][cloudclient.aricent.com][inet[/10.203.251.142:9300]]
at 
org.elasticsearch.indices.recovery.RecoveryTarget.doRecovery(RecoveryTarget.java:303)
at 
org.elasticsearch.indices.recovery.RecoveryTarget.access$300(RecoveryTarget.java:65)
at 
org.elasticsearch.indices.recovery.RecoveryTarget$2.run(RecoveryTarget.java:171)

Thanks,
Subhadip
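In case it helps later readers: here is a small diagnostic sketch (plain Python, nothing Elasticsearch-specific) that pulls the index name, shard id, and the source/target node names out of a RecoveryFailedException line like the one above. It assumes the log format shown in this thread; the `parse_recovery_failure` helper and the regex are my own, not part of any Elasticsearch API.

```python
import re

# The RecoveryFailedException line from the log above, joined onto one line.
LOG_LINE = (
    "org.elasticsearch.indices.recovery.RecoveryFailedException: "
    "[logstash-2014.02.27][1]: Recovery failed from "
    "[Brutacus][L0THepPDR6Cmx7O2jqVLdg][cloudclient1.aricent.com]"
    "[inet[/10.203.251.143:9300]] into "
    "[DJ][LajLfvlYSAGZdsyFnorwEA][cloudclient.aricent.com]"
    "[inet[/10.203.251.142:9300]]"
)

def parse_recovery_failure(line):
    """Return (index, shard, source_node, target_node), or None if no match."""
    m = re.search(
        r"RecoveryFailedException: \[([^\]]+)\]\[(\d+)\]: "
        r"Recovery failed from \[([^\]]+)\].* into \[([^\]]+)\]",
        line,
    )
    if not m:
        return None
    index, shard, src, dst = m.groups()
    return index, int(shard), src, dst

print(parse_recovery_failure(LOG_LINE))
# → ('logstash-2014.02.27', 1, 'Brutacus', 'DJ')
```

Knowing which shard failed and between which two nodes the recovery ran makes it easier to check whether the old 0.90 data directory on the new machine is being picked up by the newer node.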

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/1ee34fd2-27f2-45bb-82d0-f11cdf07aaf2%40googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


Re: elasticsearch outofmemory error

2014-02-28 Thread Subhadip Bagui
Hi David,

I'm trying to move Elasticsearch to a new system, but when I try to 
start the service it throws the exception below, although the process is 
running. An old Elasticsearch 0.90 installation was copied over; it is no 
longer running, as I checked. Please suggest.

[2014-02-28 16:40:13,588][WARN ][transport.netty  ] [DJ] Message 
not fully read (response) for [4] handler 
future(org.elasticsearch.indices.recovery.RecoveryTarget$4@1b4268b), error 
[true], resetting
[2014-02-28 16:40:13,589][WARN ][transport.netty  ] [DJ] Message 
not fully read (response) for [3] handler 
future(org.elasticsearch.indices.recovery.RecoveryTarget$4@151d6cb), error 
[true], resetting
[2014-02-28 16:40:13,590][WARN ][indices.cluster  ] [DJ] 
[logstash-2014.02.27][1] failed to start shard
org.elasticsearch.indices.recovery.RecoveryFailedException: 
[logstash-2014.02.27][1]: Recovery failed from 
[Brutacus][L0THepPDR6Cmx7O2jqVLdg][cloudclient1.aricent.com][inet[/10.203.251.143:9300]]
 
into 
[DJ][LajLfvlYSAGZdsyFnorwEA][cloudclient.aricent.com][inet[/10.203.251.142:9300]]
at 
org.elasticsearch.indices.recovery.RecoveryTarget.doRecovery(RecoveryTarget.java:303)
at 
org.elasticsearch.indices.recovery.RecoveryTarget.access$300(RecoveryTarget.java:65)
at 
org.elasticsearch.indices.recovery.RecoveryTarget$2.run(RecoveryTarget.java:171)

--
Thanks
Subhadip

On Friday, February 28, 2014 3:45:18 PM UTC+5:30, Subhadip Bagui wrote:
>
> Hi,
>
> I'm getting the issue below when trying to increase the page size in Kibana, 
> which uses Elasticsearch.
>
> [2014-02-28 11:24:29,280][DEBUG][action.search.type ] [Nut] [182865] 
> Failed to execute fetch phase
> org.elasticsearch.ElasticsearchException: Java heap space
> at 
> org.elasticsearch.ExceptionsHelper.convertToRuntime(ExceptionsHelper.java:37)
> at 
> org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:451)
> at 
> org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteFetch(SearchServiceTransportAction.java:406)
> at 
> org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.executeFetch(TransportSearchQueryThenFetchAction.java:150)
> at 
> org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction$2.run(TransportSearchQueryThenFetchAction.java:134)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
>
>
> I'm also getting the error below in the Elasticsearch log.
>
> [2014-02-28 11:29:55,121][WARN ][cluster.action.shard ] [Nut] 
> [logstash-2014.02.28][3] sending failed shard for [logstash-2014.02.28][3], 
> node[3DLIN4cCRPmMM_6a_1yTCA], [P], s[STARTED], indexUUID 
> [bWO_8-eUQQ6WDxcl2KoUkw], reason [engine failure, message 
> [MergeException[java.lang.OutOfMemoryError: Java heap space]; nested: 
> OutOfMemoryError[Java heap space]; ]]
>
> My heap allocation in Elasticsearch is
>
> ES_HEAP_SIZE=256m
>
> ES_DIRECT_SIZE=512m
>
> Please help me rectify this.
> Subhadip
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/d5c38f22-b2a6-4f7e-ba37-a4de3630db0f%40googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


elasticsearch outofmemory error

2014-02-28 Thread Subhadip Bagui


Hi,

I'm getting the issue below when trying to increase the page size in Kibana, 
which uses Elasticsearch.

[2014-02-28 11:24:29,280][DEBUG][action.search.type ] [Nut] [182865] Failed 
to execute fetch phase
org.elasticsearch.ElasticsearchException: Java heap space
at 
org.elasticsearch.ExceptionsHelper.convertToRuntime(ExceptionsHelper.java:37)
at 
org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:451)
at 
org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteFetch(SearchServiceTransportAction.java:406)
at 
org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.executeFetch(TransportSearchQueryThenFetchAction.java:150)
at 
org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction$2.run(TransportSearchQueryThenFetchAction.java:134)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)


I'm also getting the error below in the Elasticsearch log.

[2014-02-28 11:29:55,121][WARN ][cluster.action.shard ] [Nut] 
[logstash-2014.02.28][3] sending failed shard for [logstash-2014.02.28][3], 
node[3DLIN4cCRPmMM_6a_1yTCA], [P], s[STARTED], indexUUID 
[bWO_8-eUQQ6WDxcl2KoUkw], reason [engine failure, message 
[MergeException[java.lang.OutOfMemoryError: Java heap space]; nested: 
OutOfMemoryError[Java heap space]; ]]

My heap allocation in Elasticsearch is

ES_HEAP_SIZE=256m

ES_DIRECT_SIZE=512m

Please help me rectify this.
Subhadip
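For what it's worth, 256m is a very small heap for Kibana-style queries over logstash indices. A common rule of thumb (a guideline, not an official formula) is to give the Elasticsearch JVM roughly half of the machine's RAM, capped below about 31 GB so the JVM can keep using compressed object pointers. A minimal sketch of that arithmetic, with the cap value as an assumption:

```python
def suggest_heap_mb(total_ram_mb, cap_mb=31 * 1024):
    """Suggest an ES_HEAP_SIZE in megabytes: half of total RAM, capped
    below ~31 GB to preserve compressed object pointers."""
    return min(total_ram_mb // 2, cap_mb)

print(suggest_heap_mb(4 * 1024))    # 4 GB machine → 2048, i.e. ES_HEAP_SIZE=2048m
print(suggest_heap_mb(128 * 1024))  # 128 GB machine → 31744 (hits the cap)
```

After raising ES_HEAP_SIZE, restart the node; if the OutOfMemoryError persists, reducing the Kibana page size (so each fetch phase pulls fewer documents at once) may also help.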

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/dbf6a5e5-f722-4414-92e3-72268f6336f2%40googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.