I would first try without any script. If it still does not work, you could perhaps open an issue on the project.
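Also, looking at the skipped line in the log below, the values seem to be joined with ";" while your river config sets "field_separator" : ",". If the file really is semicolon-separated, the parser would see the whole line as a single field, which would explain the ArrayIndexOutOfBoundsException: 1. A quick sketch of that mismatch (Python's csv module here just for illustration, with a shortened, hypothetical line shaped like the one in your log):

```python
import csv
import io

# Hypothetical line shaped like the one in the log: values quoted and
# joined with ';' (shortened to five fields for the sketch).
line = '"9999249573";"875";"testaasim";"00:12:F3:1B:A5:68";"2"'

# Parsed with the separator the river config declares (','):
with_comma = next(csv.reader(io.StringIO(line), delimiter=','))

# Parsed with ';', which is what the file itself seems to use:
with_semicolon = next(csv.reader(io.StringIO(line), delimiter=';'))

print(len(with_comma))      # 1  -> the whole line collapses into one field
print(len(with_semicolon))  # 5  -> the individual values
```

If that is what is happening, setting "field_separator" : ";" in the river config might be worth a try before anything else.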
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

> On 30 Dec 2014, at 11:00, Vinay H M <vi...@paqs.biz> wrote:
>
> Yes, a shell script can be used... but this error occurs when I run Elasticsearch. Could I get some sample code to extract the CSV file and run it in Elasticsearch and Kibana?
>
>> On Tuesday, December 30, 2014 1:25:03 PM UTC+5:30, David Pilato wrote:
>> I don't know this plugin, but are you sure you can provide a shell script? It sounds like Groovy is trying to execute it...
>>
>> --
>> David ;-)
>> Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
>>
>>> On 30 Dec 2014, at 04:57, Vinay H M <vi...@paqs.biz> wrote:
>>>
>>>> On Tuesday, December 30, 2014 9:23:58 AM UTC+5:30, Vinay H M wrote:
>>>> Hi all,
>>>>
>>>> I get this error while running Elasticsearch. Please can someone help me solve it?
>>>>
>>>> [2014-12-30 09:16:22,389][ERROR][org.agileworks.elasticsearch.river.csv.CSVRiver] [Aliyah Bishop] [csv][my_csv_river] Error has occured during processing file 'PDUserDeviceDataTable.csv.processing' , skipping line: '[9999249573";"875";"testaasim";"00:12:F3:1B:A5:68";"2";"1344";"0";"29.7";"58.3";"1419835852";"20.0";"30.0";"40.0";"50.0";"500";"500";"12.9226205";"77.5605173]' and continue in processing
>>>> java.lang.ArrayIndexOutOfBoundsException: 1
>>>>     at org.codehaus.groovy.runtime.BytecodeInterface8.objectArrayGet(BytecodeInterface8.java:360)
>>>>     at org.agileworks.elasticsearch.river.csv.OpenCSVFileProcessor.processDataLine(OpenCSVFileProcessor.groovy:72)
>>>>     at org.agileworks.elasticsearch.river.csv.OpenCSVFileProcessor.this$2$processDataLine(OpenCSVFileProcessor.groovy)
>>>>     at org.agileworks.elasticsearch.river.csv.OpenCSVFileProcessor$this$2$processDataLine.callCurrent(Unknown Source)
>>>>     at org.agileworks.elasticsearch.river.csv.OpenCSVFileProcessor.process(OpenCSVFileProcessor.groovy:49)
>>>>     at org.agileworks.elasticsearch.river.csv.CSVConnector.processAllFiles(CSVConnector.groovy:47)
>>>>     at org.agileworks.elasticsearch.river.csv.CSVConnector.run(CSVConnector.groovy:20)
>>>>     at java.lang.Thread.run(Thread.java:745)
>>>
>>> The command I am using to create the index:
>>>
>>> curl -XPUT localhost:9200/_river/my_csv_river/_meta -d '
>>> {
>>>   "type" : "csv",
>>>   "csv_file" : {
>>>     "folder" : "/home/paqs/Downloads/kibana/dec",
>>>     "filename_pattern" : ".*\\.csv$",
>>>     "poll" : "1m",
>>>     "fields" : [
>>>       "Sno",
>>>       "userld",
>>>       "userName",
>>>       "deviceld",
>>>       "deviceCurrentMode",
>>>       "co2Level",
>>>       "dustLevel",
>>>       "temperature",
>>>       "relativeHumidity",
>>>       "timeStamp",
>>>       "tempLow",
>>>       "tempHigh",
>>>       "rhLow",
>>>       "rhHigh",
>>>       "dust",
>>>       "pollution",
>>>       "latitude",
>>>       "longitude"
>>>     ],
>>>     "first_line_is_header" : "false",
>>>     "field_separator" : ",",
>>>     "escape_character" : "\\",
>>>     "quote_character" : "\"",
>>>     "field_id" : "id",
>>>     "field_timestamp" : "imported_at",
>>>     "concurrent_requests" : "1",
>>>     "charset" : "UTF-8",
>>>     "script_before_file" : "/home/paqs/Downloads/kibana/dec/before_file.sh",
>>>     "script_after_file" : "/home/paqs/Downloads/kibana/dec/after_file.sh",
>>>     "script_before_all" : "/home/paqs/Downloads/kibana/dec/before_all.sh",
>>>     "script_after_all" : "/home/paqs/Downloads/kibana/dec/after_all.sh"
>>>   },
>>>   "index" : {
>>>     "index" : "decdevicedata",
>>>     "type" : "alert",
>>>     "bulk_size" : 1000,
>>>     "bulk_threshold" : 10
>>>   }
>>> }'
>>>
>>> The curl command I am using to create the mapping:
>>>
>>> # Create a mapping
>>> curl -XPUT http://localhost:9200/decdevicedata -d '
>>> {
>>>   "settings" : {
>>>     "number_of_shards" : 1
>>>   },
>>>   "mappings" : {
>>>     "alert" : {
>>>       "properties" : {
>>>         "Sno" : {"type" : "integer"},
>>>         "co2Level" : {"type" : "integer"},
>>>         "deviceCurrentMode" : {"type" : "integer"},
>>>         "deviceld" : {"type" : "string"},
>>>         "dust" : {"type" : "integer"},
>>>         "dustLevel" : {"type" : "integer"},
>>>         "latitude" : {"type" : "integer"},
>>>         "longitude" : {"type" : "integer"},
>>>         "pollution" : {"type" : "integer"},
>>>         "relativeHumidity" : {"type" : "float"},
>>>         "rhLow" : {"type" : "float"},
>>>         "rhHigh" : {"type" : "float"},
>>>         "temperature" : {"type" : "float"},
>>>         "tempLow" : {"type" : "float"},
>>>         "tempHigh" : {"type" : "float"},
>>>         "timeStamp" : {"type" : "date", "ignore_malformed" : true, "format" : "dateOptionalTime"},
>>>         "userld" : {"type" : "integer"},
>>>         "userName" : {"type" : "string", "index" : "not_analyzed"}
>>>       }
>>>     }
>>>   }
>>> }'

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/7B1DFA6C-D5E5-4C7F-97E8-49F6EB5E9730%40pilato.fr.
For more options, visit https://groups.google.com/d/optout.
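P.S. One more thing worth double-checking in the mapping: latitude and longitude are mapped as integer, but the values in the skipped line (12.9226205 and 77.5605173) are decimal, so they would not index cleanly as integers. A quick check, with the values copied from the logged line (Python here just for illustration):

```python
# Coordinate values exactly as they appear in the skipped CSV line.
samples = ["12.9226205", "77.5605173"]

for s in samples:
    value = float(s)               # parses fine as a float
    print(s, value.is_integer())   # False: a fractional part is present
```

Mapping them as float, or as a single geo_point field if you want map visualisations in Kibana, might be safer.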