Hi,

I am facing a problem when enabling HA in Accumulo, or when migrating Accumulo from a non-HA NameNode to an HA NameNode.

I am getting an error (a bad credentials problem).

Please advise.

Regards

Parmesh
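
For reference, migrating Accumulo to an HA NameNode generally involves pointing Accumulo at the HDFS nameservice instead of a single NameNode host, and mapping the old URIs so existing metadata entries still resolve. A minimal sketch of the relevant Accumulo site configuration, assuming a hypothetical nameservice "nameservice1" and an old NameNode at "namenode1:8020" (both placeholders), and assuming Accumulo 1.6 or later where these properties exist:

  # Point Accumulo at the HA nameservice instead of a single NameNode host:
  instance.volumes=hdfs://nameservice1/accumulo

  # Map file references recorded under the old NameNode URI to the new one,
  # so metadata entries written before the migration still resolve:
  instance.volumes.replacements=hdfs://namenode1:8020/accumulo hdfs://nameservice1/accumulo

Hadoop itself also needs the standard HA nameservice settings in core-site.xml and hdfs-site.xml before Accumulo can resolve the new URI.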

On 04/10/19 3:26 AM, Mike Miller wrote:
Some tips from Ed:
The key is what shows up in the metadata - if you look at the System Metadata 
tables section in the Accumulo user’s manual, the file entries have the form 
row_id, filename, file_size, row_count.
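
As a concrete illustration, a hedged shell sketch of what those entries look like, assuming Accumulo 1.6+ (where the table is named accumulo.metadata) and a hypothetical table ID of 2; the column family is "file", the qualifier is the rfile path, and the value carries the size and count estimate:

  root@instance> scan -t accumulo.metadata -c file -b 2; -e 2<
  2< file:hdfs://nameservice1/accumulo/tables/2/default_tablet/F00001ab.rf []    12345,678

The trailing "12345,678" is the file's size in bytes and its entry-count estimate; that estimate is what the monitor sums.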

The monitor is reflecting what the metadata table is showing.  In the case 
of a bulk import, there will not be a row count estimate, so it will not show 
in the monitor.

If you entered a row via the shell and then did a flush (or a flush occurred 
for any reason), that will create a file entry that includes the row estimate, 
so it will show up in the monitor.
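
A minimal shell sketch of that sequence, with a hypothetical table named demo and a hypothetical table ID of 2:

  root@instance> createtable demo
  root@instance demo> insert row1 fam qual value1
  root@instance demo> flush -t demo -w
  root@instance demo> scan -t accumulo.metadata -c file -b 2; -e 2<

Once the flush completes (-w waits for it), a new F-prefixed file entry appears with a populated estimate, and the monitor picks it up.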

If you perform a full compaction (compact -w -t <TABLENAME>), then when it 
finishes, all of the rows will have been processed and written to “new” files, and 
those file entries will contain the row count values, which should show up in the 
monitor.
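
A hedged sketch of verifying that with the thread's own command, table name and table ID again hypothetical:

  root@instance> compact -w -t demo
  root@instance> scan -t accumulo.metadata -c file -b 2; -e 2<

When the waited-for compaction returns, the old file entries are gone and the replacement files (typically A-prefixed for a full compaction) carry fresh count estimates.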

Overall, the value shown in the monitor is an estimate and may or may not 
reflect the number of rows actually returned by scanning a table (because of 
things like TTL, bulk imports, and values written to memory but not yet flushed 
to a file). Right after a compaction, the value should be close, but again, it is 
just an estimate.
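
One related caveat worth a sketch, since labeled data comes up below: a scan only returns entries written with visibility labels when the scanning user holds a matching authorization, while the per-file estimates behind the monitor's number are computed without evaluating labels at all. A hedged shell example, user name and label hypothetical:

  root@instance> setauths -u root -s private
  root@instance demo> scan -s private

Without the matching authorization, labeled entries are silently filtered out of scans even though the file estimates still count them.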

On 2019/10/03 17:17:52, marathiboy <[email protected]> wrote:
Thanks Mike,


  My bulk-inserted data with visibility labels is not showing up (but a
  newly inserted row from the shell does show up on the monitor). All the data
  shows up correctly under hdfs /accumulo/tablesxxxxx rfiles, and rfile-info
  --dump shows everything nicely.

  This is happening consistently, so I am trying to understand how it works
  and how I can diagnose it.


  Thanks in advance
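
Putting Ed's tips above into a concrete diagnosis for this case -- a hedged sketch, with the table ID (2) and file paths hypothetical:

  # In the Accumulo shell: find the table ID, then see what the monitor is summing
  root@instance> tables -l
  root@instance> scan -t accumulo.metadata -c file -b 2; -e 2<

  # From the OS shell: confirm a bulk-loaded rfile itself is intact
  $ accumulo rfile-info --dump hdfs://nameservice1/accumulo/tables/2/b-0000abc/I0000def.rf

  # Back in the Accumulo shell: rewrite all files so estimates get computed
  root@instance> compact -w -t <TABLENAME>

If the monitor's counts appear only after the compaction, the earlier blank was the missing bulk-import estimate Ed describes, not missing data.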


