Re: Marvel rest.action.multi.allow_explicit_index: false
Hi Antonios,

A short note that Marvel 1.3.0 was released with this fix.

Cheers,
Boaz

On Saturday, November 15, 2014 6:12:13 PM UTC+1, Boaz Leskes wrote: Hi Antonios, It's been a while, but I wanted to let you know that this has been fixed and will be available in the next Marvel release. Cheers, Boaz

On Sunday, March 9, 2014 10:40:22 PM UTC+1, Boaz Leskes wrote: Hi Antonios, Marvel uses the REST bulk indexing API to send its data. It currently uses the standard format, where the index is specified on every item in the request. The setting you mention disallows that. I made a note to change this behavior in Marvel; since all the items in the body use the same index, we can move it to the URL. Until this is changed, you would indeed have to disable this setting on the cluster you intend to store Marvel data in. Cheers, Boaz

On Thursday, March 6, 2014 1:07:25 PM UTC+1, Antonios Chalkiopoulos wrote: To prevent users from overriding the index specified in the URL, I set rest.action.multi.allow_explicit_index: false in elasticsearch.yml (using ES version 1.0.1). After restarting Elasticsearch I am getting errors in the logs coming from Marvel:

java.io.IOException: Server returned HTTP response code: 400 for URL: http://localhost:9200/_bulk
[2014-03-06 12:05:10,305][ERROR][marvel.agent.exporter] error sending data

I'm guessing that in a clustered environment I could have an insecure REST API on the ES server that Marvel talks to and a secure REST API on the ES server that the clients talk to... Any ideas or experiences with securing ES while also running Marvel for monitoring?

-- You received this message because you are subscribed to the Google Groups elasticsearch group. To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/deafcb52-a633-488a-bffa-2c184ce7418b%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
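For anyone hitting the same 400 with their own clients: with rest.action.multi.allow_explicit_index: false, bulk requests still work as long as the index (and type) live in the URL (e.g. POST /myindex/mytype/_bulk) instead of in each item's metadata. A minimal sketch of the two body styles - the index name and documents here are illustrative, not from Marvel:

```python
import json

def bulk_body(docs, explicit_index=None):
    """Build an NDJSON _bulk body. With explicit_index=None the index is
    expected to come from the URL, which is the only form accepted when
    allow_explicit_index is false; naming an index per item is rejected
    with HTTP 400."""
    lines = []
    for doc in docs:
        action = {"index": {}}
        if explicit_index is not None:
            action["index"]["_index"] = explicit_index  # blocked by the setting
        lines.append(json.dumps(action))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # _bulk bodies must end with a newline

docs = [{"msg": "a"}, {"msg": "b"}]
print(bulk_body(docs))              # safe form: no _index in the action lines
print(bulk_body(docs, "myindex"))   # explicit form: what pre-1.3.0 Marvel sent
```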
Re: marvel.agent.exporter getting 500 back from logging cluster
You are probably running into https://github.com/FasterXML/jackson-dataformat-smile/issues/18, which was fixed by upgrading Jackson in ES via https://github.com/elasticsearch/elasticsearch/pull/7327 (version 1.4.0).

Cheers,
Boaz

On Tuesday, December 9, 2014 4:11:00 AM UTC+1, Lane Harris wrote: Hello all. We are running Marvel 1.2.1 on ES 1.3.4/Java 1.7.55 with a separate logging cluster that marvel.agent.exporter.es.hosts points to. The logging cluster is running the same bits as our production cluster, with marvel.agent.enabled set to false. About a week ago, we stopped seeing events for the Cluster Pulse and Shard Allocation dashboards. Whenever one of the events tracked on these dashboards occurs, the current master spits out the following log message:

[2014-12-09 02:54:59,220][ERROR][marvel.agent.exporter] [T02-C01-M03] remote target didn't respond with 200 OK response code [500 Internal Server Error]. content: [:) ?error?JsonParseException[Invalid shared name reference 293; only got 0 names in buffer (invalid content) at [Source: org.elasticsearch.transport.netty.ChannelBufferStreamInput@3fc84b46; line: -1, column: 4]]?status$ ??]

On the logging cluster, the relevant types for these pages (node_event, shard_event, etc.) dried up at the same time the errors started appearing in the logs. I checked both the Marvel template and the actual mappings - no differences between the two. Has anyone seen a similar issue? Lane
Re: Marvel Sense is not working over https
A short note that the workaround is no longer needed with Marvel 1.3.0.

Cheers,
Boaz

On Wednesday, December 3, 2014 7:11:07 AM UTC+1, Boaz Leskes wrote: Copy-pasting from the relevant GitHub issue for future reference (https://github.com/elasticsearch/elasticsearch/issues/8735): In the next Marvel release we will have automatic support for this. I wonder if things would work for you if you enter https://mydomain/ in the server box of Sense. Indeed, it doesn't listen to the config.js file of Kibana.
Re: SearchParseException (marvel) - [No mapping found for [@timestamp] in order to sort on
I would check your ES installation. This information comes from the Node Stats API, so it is likely missing there as well. Most of the time this is caused by a missing Sigar library or a process permission issue.

Cheers,
Boaz

On Sunday, December 14, 2014 8:42:22 PM UTC+1, Eugen Paraschiv wrote: One more detail on this - the Marvel UI also displays the exact query that's failing. Running that query results in a more informative message - probably the root cause of the problem:

Caused by: org.elasticsearch.search.facet.FacetPhaseExecutionException: Facet [fs.total.available_in_bytes]: failed to find mapping for fs.total.available_in_bytes

And indeed, fs.total.available_in_bytes isn't available in the marvel index. Now - the reading is available on _nodes/stats - so I'm assuming it's a mapping problem from Marvel. But I removed the old mapping, restarted the entire cluster and basically allowed Marvel to re-create the mapping it needs - so that should be correct. Any help is appreciated. Thanks.

On Sunday, December 14, 2014 6:22:22 PM UTC+2, Eugen Paraschiv wrote: Hi, I'm using Elasticsearch 1.4.1 and the latest Marvel (1.2.1). I have Marvel installed on every node of the cluster, and it is generating data into the daily index. When going into Marvel, I get the following exception:

Caused by: org.elasticsearch.search.SearchParseException: [.marvel-2014.12.14][0]: from[-1],size[1]: Parse Failure [No mapping found for [@timestamp] in order to sort on]

So this is referring specifically to the .marvel-2014.12.14 index - an index created by Marvel itself, which should thus have the right structure. Am I missing something related to the Marvel setup? Thank you, Eugen.
Re: SearchParseException (marvel) - [No mapping found for [@timestamp] in order to sort on
The strange thing is that, for some reason, just by leaving it for a couple of hours, it started working. I'm assuming some data was missing but was still being written into the marvel index.

Thanks,
Eugen.

On Sat, Dec 20, 2014 at 10:58 AM, Boaz Leskes b.les...@gmail.com wrote: I would check your ES installation. This information comes from the Node Stats API, so it is likely missing there as well. Most of the time this is caused by a missing Sigar library or a process permission issue. Cheers, Boaz
--
Eugen Paraschiv
Consultant, Baeldung
Mobile: +40728896170
Blog: www.baeldung.com
Twitter: @baeldung
Re: Sustainable way to regularly purge deleted docs
I thought I should revisit this thread in case anyone else is repeating my mistakes, which it turns out are multiple. On the bright side, I do seem to have resolved my issues. tl;dr: optimize was screwing me up, and the merge settings I thought I had in place were not actually there/active. Once applied, all is well.

First, the regular use of optimize?only_expunge_deletes. I did not realize at first that this command would in fact ignore the max_merged_segment parameter (I thought I had checked it at one point, but I must not have). While max_merged_segment was set to 2 GB, I ended up with segments as large as 17 GB. I reindexed everything one weekend to observe merge behaviour better and clear these out, and it wasn't until those segments were almost completely full of deleted docs that they were merged out (they finally vanished overnight, so I'm not exactly sure what the tipping point was, but I do know they were at around 4/5 deleted at one point). Clearly my use of optimize was putting the system in a state that only additional optimize calls could clean up, making the cluster addicted to the optimize call.

Second, and this is the more embarrassing thing, my changed merge settings had mostly not taken effect (or were reverted at some point). After removing all of the large segments via a full reindex, I added nodes to get the system to a stable point where normal merging would keep the deleted docs in check. It ended up taking 5-6 nodes to maintain ~30% delete equilibrium and enough memory to operate, which was 2-3 more nodes than I really wanted to dedicate. I decided then to bump the max_merged_segment up as per Nikolas's recommendation above (just returning it to the default 5 GB to start with), but noticed that the index merge settings were not what I thought they were. Sometime, probably months ago when I was trying to tune things originally, I apparently made a mistake, though I'm still not exactly sure when/where.

I had the settings defined in the elasticsearch.yml file, but I'm guessing those are only applied to new indices when they're created, not to existing indices that already have their configuration set? I know I had updated some settings via the API at some point, but perhaps I had reverted them, or simply not applied them to the index in question. Regardless, the offending index still had mostly default settings, only the max_merged_segment being different at 2 GB. I applied the settings above (plus the 5 GB max_merged_segment value) to the cluster and then performed a rolling restart to let the settings take effect. As each node came up, the deleted docs were quickly merged out of existence and the node stabilized at ~3% deleted. CPU spiked to 100% while this took place; disk didn't seem to be too stressed (it reported 25% utilization when I checked via iostat at one point), but once the initial clean-up was done things settled down, and I'm expecting smaller spikes as it maintains the lower deleted percentage (I may even back down the reclaim_deletes_weight). I need to see how it actually behaves under normal load during the week before deciding everything is completely resolved, but so far things look good, and I've been able to back down to only 3 nodes again.

So, I've probably wasted dozens of hours and hundreds of dollars of server time resolving what was ultimately a self-inflicted problem that should have been fixed easily months ago. So it goes.

On Thursday, December 4, 2014 11:54:07 AM UTC-5, Jonathan Foy wrote: Hello. I do agree with both of you that my use of optimize as regular maintenance isn't the correct way to do things, but it's been the only thing that I've found that keeps the deleted doc count/memory under control. I very much want to find something that works to avoid it. I came to much the same conclusions that you did regarding the merge settings and logic.
It took a while (and eventually just reading the code) to find out that though dynamic, the merge settings don't actually take effect until a shard is moved/created (fixed in 1.4), so a lot of my early work thinking I'd changed settings wasn't really valid. That said, my merge settings are still largely what I have listed earlier in the thread, though repeating them for convenience:

indices.store.throttle.type: none
index.merge.policy.reclaim_deletes_weight: 6.0 -- This one I know is quite high; I kept bumping it up before I realized the changes weren't taking effect immediately
index.merge.policy.max_merge_at_once: 5
index.merge.policy.max_merge_at_once_explicit: 5
index.merge.policy.segments_per_tier: 5
index.merge.policy.max_merged_segment: 2gb

I DO have a mess of nested documents in the type that I know is the most troublesome... perhaps the merge logic doesn't take deleted nested documents into account when deciding what segment to merge? Or perhaps since I have a small max_merged_segment, it's like
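Since the yml route silently failed for the existing index, applying the settings through the index settings API is the reliable path. A sketch of the body I would PUT to the index, mirroring the values listed above (the 5gb value is the later change; "myindex" is a placeholder):

```python
import json

# Settings mirroring those listed above; "myindex" is a placeholder name.
# Body for: PUT /myindex/_settings (pre-1.4, remember the shard move/create caveat).
merge_settings = {
    "index.merge.policy.reclaim_deletes_weight": 6.0,
    "index.merge.policy.max_merge_at_once": 5,
    "index.merge.policy.max_merge_at_once_explicit": 5,
    "index.merge.policy.segments_per_tier": 5,
    "index.merge.policy.max_merged_segment": "5gb",
}

print(json.dumps(merge_settings, indent=2))
```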
Dynamically appending a query (for data entitlements)
I have a use case where, for every query that comes from the user to Elasticsearch (ES), I want to add another query on the ES server side before ES executes it. The reason I need to dynamically add this other query is to enforce data-level entitlements. For example, let's say that I am storing Orders in one of my ES indexes, and each Order has a vendorid associated with it. When a user of my app submits a query for Orders, I want to make sure that ES only returns the Orders that belong to this user's vendorid. E.g. the user may have submitted a query to show all orders where order value = $100; I want to append another query saying that only the Orders associated with this user's vendor id should be returned. How can I achieve this? In the servlet world we have the mechanism of FILTERS. Is something similar available in ES? Thanks, Lokesh
Re: Dynamically appending a query (for data entitlements)
Hello!

Are you allowing your users to directly talk to Elasticsearch? If so, apart from modifying Elasticsearch (either the base code itself, or through a dedicated plugin), you can't achieve what you want. You could use aliases (http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-aliases.html) and define an alias per vendor that would restrict the data returned. However, if users are allowed to talk to Elasticsearch directly, there is a high risk that one would just omit the alias and go directly to the indices. On the other hand, you probably have some application in front of Elasticsearch, and that is the perfect place to take the query from the user and modify it to include the additional filter.

--
Regards,
Rafał Kuć
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/
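For reference, the per-vendor alias Rafał mentions would be a filtered alias. A sketch of the body to POST to /_aliases - the index name, alias naming scheme, and vendorid field are illustrative, taken from the Orders example in the question:

```python
import json

def vendor_alias(index, vendor_id):
    """Build an _aliases action that adds a filtered alias restricting
    reads to one vendor's documents. Field/index names are illustrative;
    POST the result to /_aliases."""
    return {
        "actions": [
            {
                "add": {
                    "index": index,
                    "alias": "%s-vendor-%s" % (index, vendor_id),
                    "filter": {"term": {"vendorid": vendor_id}},
                }
            }
        ]
    }

print(json.dumps(vendor_alias("orders", "v42"), indent=2))
```

Searches against orders-vendor-v42 then transparently get the term filter applied; the caveat from the reply stands, since nothing stops a user with direct cluster access from querying the underlying index instead.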
Re: Sum of total term frequency in ONE document
Hi, why don't you use script fields to access this value: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-script-fields.html#search-request-script-fields

Thanks,
Vineeth

On Wed, Dec 17, 2014 at 2:57 PM, valerij.vasilce...@googlemail.com wrote: I need the sumttf of ONE document in a field. However, I can only get the sumttf of all documents... I need to be able to access the variable in a script, like _index['field'].sumttf(), for that particular document. This is what I've got so far.

Mapping:

{
  "document2": {
    "mappings": {
      "document2": {
        "_all": { "enabled": false },
        "properties": {
          "content": {
            "type": "string",
            "term_vector": "yes",
            "fields": {
              "with_shingles": {
                "type": "string",
                "analyzer": "my_shingle_analyzer"
              }
            }
          },
          ...

Term vector:

"_index": "document2",
"_type": "document2",
"_id": "709718",
"_version": 1,
"term_vectors": {
  "content": {
    "field_statistics": {
      "sum_doc_freq": 60676474,
      "doc_count": 198373,
      "sum_ttf": 224960172
    },
    "terms": {
      "0": { "term_freq": 8 },
      "0.5": { "term_freq": 1 },
      "003a0e45ea07a": { "term_freq": 1 },
      "005": { "term_freq": 1 },
      "0081989": { "term_freq": 1 },
      "01": { "term_freq": 1 },
      "01.08.2002": { "term_freq": 1 },
      ...
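For reference, a script_fields request along the lines Vineeth suggests would look like the sketch below (1.x-era scripting syntax; field name taken from the mapping above). Note the caveat Valerij already ran into: sumttf() is a field-level statistic, so this exposes the shard-wide total term frequency, not a per-document one.

```python
import json

# Search body using script_fields to expose index term statistics.
# _index['content'].sumttf() returns the field's total term frequency
# for the shard, which is the limitation discussed in the thread.
search_body = {
    "query": {"match_all": {}},
    "script_fields": {
        "field_sumttf": {
            "script": "_index['content'].sumttf()"
        }
    },
}

print(json.dumps(search_body, indent=2))
```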
Re: FiltrES - A language that compiles to ElasticSearch Query DSL
Hi, how different is it from the query_string query language? Or is it using query_string in the background?

Thanks,
Vineeth

On Wed, Dec 17, 2014 at 5:23 AM, Abe Haskins abeisgr...@abeisgreat.com wrote: Hi folks! I wanted to share FiltrES.js (https://github.com/abeisgreat/FiltrES.js), a tool for compiling simple human-readable expressions (i.e. '(height = 73 or (favorites.color == green and height != 73)) and firstname ~= o.+') into ES queries. This is useful for times when you want end users (or developers who aren't ES experts) to be able to query based on arbitrary filters. It doesn't use script filters, so it's safe and easy to use. I'd love to get any thoughts/feedback as I am *not* an ES expert and FiltrES was written so I could use it, but I'm happy to expand it for more complex/interesting use cases. Best, Abe
Integration tests with gradle
The docs for unit testing (http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/using-elasticsearch-test-classes.html) have you configuring your pom in a certain way; apparently the order of these dependencies matters. When attempting to use the ElasticsearchIntegrationTest class from a project managed by Gradle, the dependencies section would look something like this:

dependencies {
    testCompile 'org.hamcrest:hamcrest-core:1.3'
    testCompile 'junit:junit:4.12'
    testCompile 'org.elasticsearch:elasticsearch:1.3.4:tests'
    testCompile 'org.apache.lucene:lucene-test-framework:4.9.1'
    testCompile 'com.carrotsearch.randomizedtesting:randomizedtesting-runner:2.1.11'
    compile 'org.elasticsearch:elasticsearch:1.3.4'
}

Unfortunately, classpath order is not guaranteed in this world, and so a simple test with ElasticsearchIntegrationTest fails with:

java.lang.AssertionError: fix your classpath to have tests-framework.jar before lucene-core.jar
    __randomizedtesting.SeedInfo.seed([20D50B2CB59AFD95]:0)
    org.apache.lucene.util.TestRuleSetupAndRestoreClassEnv.before(TestRuleSetupAndRestoreClassEnv.java:177)
    org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    [...com.carrotsearch.randomizedtesting.*]
    org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
    org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
    [...com.carrotsearch.randomizedtesting.*]
    java.lang.Thread.run(Thread.java:745)

Do we have a documented story for how to test elasticsearch code with Gradle?

Jon
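One workaround worth trying, though it is a sketch rather than a guaranteed fix: Gradle generally builds the compile/test classpaths in declaration order within a configuration, so declaring lucene-test-framework before the elasticsearch artifacts usually gets the test framework jar ahead of lucene-core. Verify the resulting order with `gradle dependencies` or your IDE's classpath view, since ordering is not formally guaranteed across Gradle versions:

dependencies {
    // Declare the Lucene test framework first so it precedes lucene-core
    // (pulled in transitively by elasticsearch) on the test classpath.
    testCompile 'org.apache.lucene:lucene-test-framework:4.9.1'
    testCompile 'org.elasticsearch:elasticsearch:1.3.4:tests'
    testCompile 'com.carrotsearch.randomizedtesting:randomizedtesting-runner:2.1.11'
    testCompile 'junit:junit:4.12'
    testCompile 'org.hamcrest:hamcrest-core:1.3'
    compile 'org.elasticsearch:elasticsearch:1.3.4'
}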
[ANN] Elasticsearch Ratpack plugin 1.4.0.0
Hi, I have written a small plugin in Groovy to embed my future Ratpack (http://ratpack.io) applications into Elasticsearch. More info at https://github.com/jprante/elasticsearch-plugin-ratpack

Have fun,
Jörg
Kibana 4 BETA 2
Does anyone have instructions on how to use Tile Maps for plotting Twitter data in Kibana 4 BETA 2? I can't seem to get it to work and cannot find anything to reference.
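One common stumbling block with tile maps is the mapping: Kibana's tile map visualization needs a field explicitly mapped as geo_point, and dynamic mapping will not infer that from Twitter's coordinate arrays, so it must be set before indexing. A hedged sketch - the index, type, and field names below are assumptions for a typical Twitter ingest, not something from this thread:

```python
import json

# Illustrative mapping: tile maps require a geo_point field, which must be
# mapped explicitly before any documents are indexed. Names are placeholders.
tweet_mapping = {
    "tweet": {
        "properties": {
            "coordinates": {"type": "geo_point"}
        }
    }
}

# Body for: PUT /twitter/_mapping/tweet
print(json.dumps(tweet_mapping, indent=2))
```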
Re: Dynamically appending a query (for data entitlements)
I am allowing users to talk to Elasticsearch (ES) through Kibana. As of now I am not planning to write my own user interface on top of ES. But even with an app on top of ES, I would like the data entitlement checks to happen on the ES server side, to ensure that no matter where a query comes from, the server guarantees that only entitled data is returned. Aliasing won't work as a solution for our use case. Let me check the plugins route - are there any good references on the web that provide a tutorial on how to write ES plugins?

Thanks,
Lokesh

On Sunday, December 21, 2014 1:22:19 AM UTC+5:30, Rafał Kuć wrote: Hello! Are you allowing your users to directly talk to Elasticsearch? If so, apart from modifying Elasticsearch (either the base code itself, or through a dedicated plugin), you can't achieve what you want. You could use aliases and define an alias per vendor that would restrict the data returned. However, if users are allowed to talk to Elasticsearch directly, there is a high risk that one would just omit the alias and go directly to the indices. On the other hand, you probably have some application in front of Elasticsearch, and that is the perfect place to take the query from the user and modify it to include the additional filter.
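For completeness, the app-side rewrite Rafał describes is a small wrapper: take whatever query body the user submitted and nest it under a filter carrying the entitlement constraint. A sketch using the 1.x-era filtered query - the vendorid field name is illustrative, carried over from the Orders example:

```python
import json

def entitled_query(user_query, vendor_id):
    """Wrap an arbitrary user query so only the given vendor's documents
    can match. This is the application-side rewrite from the reply above;
    the vendorid field is an assumed name."""
    return {
        "query": {
            "filtered": {                  # 1.x filtered query
                "query": user_query,
                "filter": {"term": {"vendorid": vendor_id}},
            }
        }
    }

user_q = {"match": {"status": "open"}}
print(json.dumps(entitled_query(user_q, "v42"), indent=2))
```

With Kibana in the middle there is no such interception point, which is why the plugin (or a proxy in front of ES) route comes up; the wrapper above only works when every query passes through your own application.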