Hello all, I can't remove duplicates. I'm using Ferret to index log files in order to monitor application activity. What I want to do is index data based on the uniqueness of [filename, line] (actually it should be [host, filename, line]). The code is the following:

if !$indexer
  # default field settings: untokenized, norms omitted, no term vectors
  field_infos = Ferret::Index::FieldInfos.new(:index => :untokenized_omit_norms,
                                              :term_vector => :no)
  field_infos.add_field(:content, :store => :yes, :index => :yes)

  $indexer = Ferret::I.new(:path => index_dir,
                           :field_infos => field_infos,
                           :key => [:filename, :line],
                           :max_buffered_docs => 100)

  # Earlier attempt, kept for reference:
  # $indexer ||= Ferret::I.new(:path => index_dir,
  #                            :key => ['filename', 'line'],
  #                            :max_buffered_docs => 100) # unique host, filename, line
  # $indexer.field_infos.add_field(:time,
  #                                # :default_boost => 20,
  #                                :store => :yes,
  #                                :index => :untokenized,
  #                                :term_vector => :no)
end
But the problem is that a new document gets indexed even when [filename, line] is the same. Even if I change it to :key => ["filename", "line"], it still doesn't work. What is the problem? Thanks.
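
For reference, here is a minimal sketch of how I understand the :key option is supposed to behave when adding documents. The index path and field values are made up, and I'm assuming the key fields have to appear in every document and be added consistently (e.g. always as strings):

require 'ferret'

index = Ferret::I.new(:path => '/tmp/test_index',
                      :key  => [:filename, :line])

# Same [filename, line] key: the second add should replace the first.
index << {:filename => 'app.log', :line => '42', :content => 'first message'}
index << {:filename => 'app.log', :line => '42', :content => 'second message'}

# Different key: a new document should be added.
index << {:filename => 'app.log', :line => '43', :content => 'another line'}

puts index.size   # I expect 2 here if the key-based replacement works

One thing I'm not sure about: if the real code adds :line as an Integer in some places and as a String in others, would the key values still match? That's an assumption I'd like to rule out.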
