Re: Securing Data in Elasticsearch

2014-06-23 Thread Harvii Dent
Just an update: it should be possible to protect ES from most malicious 
requests originating on Kibana's side by allowing only the requests 
described in this nginx config: 
https://github.com/elasticsearch/kibana/blob/master/sample/nginx.conf
I've been looking through the ES API reference and haven't found any other 
way to perform update/delete operations that would bypass the above config, 
although it would be great if someone could confirm that.
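
To make the idea concrete, here is a minimal sketch of the filtering approach that sample config takes. This is not copied from the linked file; the endpoints, the dashboard index name ("kibana-int"), and the upstream address are assumptions for illustration:

```nginx
server {
  listen 80;

  # Read-only endpoints Kibana needs for browsing and searching.
  location ~ ^/(_aliases|_nodes)$ {
    proxy_pass http://127.0.0.1:9200;
  }
  location ~ ^/.+/_search$ {
    proxy_pass http://127.0.0.1:9200;
  }
  location ~ ^/.+/_mapping$ {
    proxy_pass http://127.0.0.1:9200;
  }

  # Writes are allowed only against the index Kibana uses to store
  # its own dashboards (assumed here to be "kibana-int").
  location ~ ^/kibana-int/.+$ {
    proxy_pass http://127.0.0.1:9200;
  }

  # Everything else is refused, so arbitrary index deletes/updates
  # never reach ES through this proxy.
  location / {
    return 403;
  }
}
```

This only helps if ES itself is unreachable except through the proxy; otherwise an attacker can simply talk to port 9200 directly.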

Thanks


-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/3eb483d3-ab1a-425c-b629-c490a67c668a%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Securing Data in Elasticsearch

2014-06-19 Thread Harvii Dent
@Zennet: I was thinking of doing something similar via a reverse proxy in 
front of Kibana; however, I believe Kibana still uses DELETE, PUT, and POST 
requests to save its dashboards, so I'm not sure exactly what to block.

@Jaguar: the Jetty plugin looks interesting, especially the 
jetty-restrict-writes.xml part; I'll be taking a look at that.

As Jörg said, it shouldn't be too difficult to craft a DELETE request and 
spoof its source so it appears to come from a trusted host; I just wish 
there were an option built into ES to disable deletes/updates, or at least 
to authenticate them first.

Thanks everyone



Re: Securing Data in Elasticsearch

2014-06-15 Thread Harvii Dent
Thanks Jörg. Something like OSSEC (www.ossec.net) can provide the 
checksumming you mention below, but I guess it would only work on indices 
that have been finalized and marked as read-only (assuming ES no longer 
modifies the files on disk at that point).
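
For the read-only case, the integrity check a tool like OSSEC performs can be sketched in a few lines: record a checksum of every file under the index directory once it is finalized, then periodically verify that nothing changed. This is only an illustration of the idea; a real file-integrity monitor does the same thing more robustly.

```python
import hashlib
import os

def checksum_tree(root):
    """Map each file path under root (relative) to its SHA-256 digest."""
    sums = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            sums[os.path.relpath(path, root)] = h.hexdigest()
    return sums

def verify(root, baseline):
    """Return the set of files that were added, removed, or modified."""
    current = checksum_tree(root)
    changed = {p for p in baseline if baseline.get(p) != current.get(p)}
    changed |= set(current) - set(baseline)
    return changed
```

Take the baseline right after the index is marked read-only; any non-empty result from verify() afterwards means the index files on disk were touched.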

As for active indices, is there anything that can be done on the reverse 
proxy to prevent delete and update operations coming from Kibana's side? I 
believe some requests can be filtered, but I'm not sure exactly which ones 
can be blocked without affecting Kibana's core functionality.

Regards

On Saturday, June 14, 2014 12:44:31 AM UTC+3, Jörg Prante wrote:

 You should start HTTP only on localhost then and run Kibana on a selected 
 number of nodes only.

 There are some authentication solutions for Kibana.

 I am not able to find security features like audit trails or write 
 prevention in Kibana/ES, so you have to take care yourself. Assessing 
 Kibana for attacks over the web (intrusion detection, command execution, 
 etc.) would be useful; I don't know if anyone has tried such a thing, but 
 it is a very complex task.

 Because this variant is tedious and maybe not successful, I would opt for 
 a different approach: keep a checksummed copy of an index in a safe, 
 restricted place on a private ES cluster (or even burn it to optical 
 media) and rsync a copy of it to the unsafe place, the public ES cluster 
 where Kibana runs. Checksum verification can then prove whether the index 
 was modified in the meantime at the public place.

 Jörg




Re: Securing Data in Elasticsearch

2014-06-15 Thread Harvii Dent
Interesting, but this assumes that both Logstash and Kibana are on the 
same host as ES, correct? 

While this is how everything runs in my test environment, I was thinking 
of separating ES from Logstash when going to production, since each would 
require significant resources, although I'm not sure how accurate that is. 
Would there be any downsides to running Logstash and ES on the same node?

Thanks

On Sunday, June 15, 2014 5:40:25 PM UTC+3, Jörg Prante wrote:

 From what I know about Kibana, it just uses the HTTP API _search endpoint, 
 but I have not examined it more thoroughly.

 It is quite simple to set up an nginx/apache reverse proxy to filter 
 requests.

 You should add 

     http:
       host: 127.0.0.1

 to your config/elasticsearch.yml to ensure that the HTTP REST API is not 
 exposed to other hosts; nginx/apache then has to take over the job of 
 accepting Kibana HTTP on port 80 (or 443) only.

 Jörg



Re: Securing Data in Elasticsearch

2014-06-13 Thread Harvii Dent
ES nodes would be locked down and accessible only to authorized users at 
the OS level; it's the ability to delete and update indices/documents 
remotely that's worrisome in this case.
 
Disabling the HTTP REST API completely is not possible, since it's required 
by Kibana (running behind a reverse proxy), although I suppose I could 
restrict the ES node to accept traffic only from Logstash on port 9300 and 
from the reverse proxy on port 9200. Would this provide sufficient 
protection? 

Thanks
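
One way to express that restriction is with host firewall rules. A sketch with iptables, where the addresses are made up for illustration (10.0.0.5 = Logstash host, 10.0.0.6 = reverse proxy):

```shell
# Transport protocol (9300): Logstash / cluster nodes only.
iptables -A INPUT -p tcp --dport 9300 -s 10.0.0.5 -j ACCEPT
iptables -A INPUT -p tcp --dport 9300 -j DROP

# HTTP REST API (9200): reverse proxy only.
iptables -A INPUT -p tcp --dport 9200 -s 10.0.0.6 -j ACCEPT
iptables -A INPUT -p tcp --dport 9200 -j DROP
```

Note that this only narrows who can reach ES: anything that can still connect to port 9300 can issue Java API commands, as the quoted reply below points out.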

On Thursday, June 12, 2014 6:44:33 PM UTC+3, Jörg Prante wrote:

 If you want ES-level security, you should first reduce attack vectors by 
 closing down all open ports and resources that are not necessary.

 One step would be to disable the HTTP REST API completely (port 9200) and 
 run only the Logstash Elasticsearch output: 
 http://logstash.net/docs/1.4.1/outputs/elasticsearch

 As a consequence, you could then only kill the ES process on a node or 
 send Java API commands. It is not possible to block Java API commands over 
 port 9300; this is how nodes talk to each other. You could imagine a 
 self-written tool for administering your cluster that uses the Java API 
 only (from a J2EE web app, for example).

 At the OS level, you would have to protect the OS user the ES node runs 
 under from being accessed by third-party users.

 Jörg





Securing Data in Elasticsearch

2014-06-12 Thread Harvii Dent
Hello,

I'm planning to use Elasticsearch with Logstash for log management and 
search; however, one thing I'm unable to find an answer for is making sure 
that the data cannot be modified once it reaches Elasticsearch.

action.destructive_requires_name prevents deleting all indices at once, 
but they can still be deleted. Are there any options to prevent deleting 
indices altogether? 

And on the document level, is it possible to disable 'delete' *AND* 
'update' operations without setting the entire index as read-only (i.e. 
'index.blocks.read_only')?

Lastly, does setting 'index.blocks.read_only' ensure that the index files 
on disk are not changed (so they can be monitored using a file integrity 
monitoring solution)? Many regulatory and compliance bodies have 
requirements for ensuring log integrity.

Thanks
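
For reference, the two settings mentioned above can be applied as follows. This is only a sketch (the index name is hypothetical), and it does not answer the question of blocking updates/deletes selectively:

```
# config/elasticsearch.yml: refuse blanket deletes such as DELETE /_all,
# so destructive operations must name the index explicitly.
action.destructive_requires_name: true

# Mark a (hypothetical) finalized daily index read-only via the settings API.
# Note this is reversible by anyone who can reach the API, so it is not
# tamper-proofing on its own:
#   curl -XPUT 'http://localhost:9200/logstash-2014.06.12/_settings' \
#        -d '{ "index.blocks.read_only": true }'
```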



Re: Securing Data in Elasticsearch

2014-06-12 Thread Harvii Dent
ES settings alone would be great; are there other options that I could have 
missed? Right now the main priority is preventing document updates/deletes 
(and index deletes) via the ES REST API.

Thanks

On Thursday, June 12, 2014 6:21:36 PM UTC+3, Jörg Prante wrote:

 There are a lot of ways to tamper with ES files; physically, everything 
 in the files can be modified as long as your operating system permits 
 more than something like an append-only mode for ES files (not that I 
 know whether that would work).

 So it depends on your requirements for the security level you want to 
 reach whether ES settings alone can help you or whether you need more 
 (paranoid) configurations.

 Jörg

