Off the top of my head:

- Elasticsearch: I think this only needs to be done on the master.
- Hadoop: Doing this on 1 node should be sufficient, iirc.
- Re the events: are you sure that neither the Storm indexing nor the
enrichment topology has crashed and is trying to (re-)process the events,
crashing again, reprocessing, etc.? You can see whether errors are
happening in the Storm UI (Ambari -> Storm -> Quick Links -> Storm UI).
At least, that's what happened at some point on my cluster. Fixing this
involves stopping Kafka & Storm and clearing the indexing & enrichment
directories from Kafka.

On 2017-09-08 08:13, Frank Horsfall wrote:
> Hi Laurens,
>
> Does this exercise have to be executed on all 3 nodes?
>
> You also mention that if the queues are empty this should work. Is
> there a way of clearing the queues?
>
> The reason I ask is that a few days ago I shut down yaf, bro, snort,
> etc., but I'm still processing millions of events, which I suspect is
> the backlog of events that have been queued for processing.
>
> Kindest,
>
> Frank
>
> From: Laurens Vets [mailto:laur...@daemon.be]
> Sent: Wednesday, September 6, 2017 6:17 PM
> To: user@metron.apache.org
> Cc: Frank Horsfall <frankhorsf...@cunet.carleton.ca>
> Subject: Re: Clearing of data to start over
>
> Hi Frank,
>
> If all your queues (Kafka/Storm) are empty, the following should work:
>
> - Deleting your Elasticsearch indices:
>
> curl -X DELETE 'http://localhost:9200/snort_index_*'
> curl -X DELETE 'http://localhost:9200/yaf_index_*'
> etc.
>
> - Deleting your Hadoop data:
>
> Become the hdfs user: sudo su - hdfs
> Show what's been indexed in Hadoop: hdfs dfs -ls /apps/metron/indexing/indexed/
>
> The output will probably show the following:
>
> /apps/metron/indexing/indexed/error
> /apps/metron/indexing/indexed/snort
> /apps/metron/indexing/indexed/yaf
> ...
>
> You can remove these with:
>
> hdfs dfs -rmr -skipTrash /apps/metron/indexing/indexed/error/
> hdfs dfs -rmr -skipTrash /apps/metron/indexing/indexed/snort/
>
> Or the individual files with:
>
> hdfs dfs -rmr -skipTrash /apps/metron/indexing/indexed/error/FILENAME
>
> On 2017-09-06 13:59, Frank Horsfall wrote:
>
>> Hello all,
>>
>> I have installed a 3 node system using the bare metal CentOS 7
>> guideline:
>>
>> https://cwiki.apache.org/confluence/display/METRON/Metron+0.4.0+with+HDP+2.5+bare-metal+install+on+Centos+7+with+MariaDB+for+Metron+REST
>>
>> It has taken me a while to get all components working properly, and I
>> left the yaf, bro, and snort apps running, so quite a lot of data has
>> been generated. Currently, I have almost 18 million events identified
>> in Kibana: 16+ million are yaf based, 2+ million are snort, and 190
>> events are my new squid telemetry :). It looks like it still has a
>> while to go before it catches up to the current day. I recently shut
>> down the apps.
>>
>> My questions are:
>>
>> 1. Is there a way to wipe all my data and indices clean so that I may
>> now begin with a fresh dataset?
>>
>> 2. Is there a way to configure yaf so that its data is meaningful? It
>> is currently creating what looks to be test data.
>>
>> 3. I have commented out the test snort rule, but it is still
>> generating the odd record, which once again looks like test data. Can
>> this be stopped as well?
>>
>> Kindest regards,
>>
>> Frank
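For convenience, the cleanup steps discussed in this thread can be collected into one script. This is only a sketch of what was described above, not something from the thread itself: it is a dry run that PRINTS each command rather than executing it, and it assumes Elasticsearch is listening on localhost:9200, the sensor list matches your deployment, and the default /apps/metron/indexing/indexed/ layout is in use. Remove the leading "echo" on each command to actually run it (after stopping the sensors and confirming the Kafka/Storm queues are empty, as Laurens notes).

```shell
#!/bin/sh
# Dry-run sketch of the Metron cleanup steps from this thread.
# Each command is only PRINTED; delete the leading "echo" to execute.
# Assumptions (mine, not verbatim from the thread): ES on
# localhost:9200, default HDFS indexing path, sensor list below.

# Sensors whose data should be wiped; adjust to your deployment.
SENSORS="snort yaf bro"

# 1. Delete the Elasticsearch indices for each sensor.
for s in $SENSORS; do
  echo "curl -X DELETE 'http://localhost:9200/${s}_index_*'"
done

# 2. Delete the indexed HDFS data, including the error directory.
#    Run these as the hdfs user (e.g. after: sudo su - hdfs).
for d in error $SENSORS; do
  echo "hdfs dfs -rmr -skipTrash /apps/metron/indexing/indexed/${d}/"
done
```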