[ https://issues.apache.org/jira/browse/HBASE-18886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16181950#comment-16181950 ]

Ted Yu commented on HBASE-18886:
--------------------------------

Can you separate the commands from the command output?

You can enclose command output with \{code\}
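
For example, keep the command on its own line and put its output in a \{code\} block (both lines below are taken from the report):

./bin/hbase backup create full hdfs://localhost:8020/ -t test1

{code}
2017-09-27 09:29:01,458 ERROR [main] impl.BackupAdminImpl: There is an active session already running
Backup session finished. Status: FAILURE
{code}

That makes it easier to tell which lines were typed and which were printed.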

> Backup command once cancelled not able to re-trigger backup again
> ------------------------------------------------------------------
>
>                 Key: HBASE-18886
>                 URL: https://issues.apache.org/jira/browse/HBASE-18886
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Vishal Khandelwal
>
> *Repair*
> Please make sure that backup is enabled on the cluster. To enable backup, in 
> hbase-site.xml, set:
>  hbase.backup.enable=true
> hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner
> hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager
> hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager
> and restart the cluster
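> (For illustration only: in hbase-site.xml the settings listed above would look roughly like the following, with YOUR_PLUGINS and YOUR_CLASSES standing in for any values already configured.)
> {code}
> <!-- Sketch of the backup-related hbase-site.xml entries described above -->
> <property>
>   <name>hbase.backup.enable</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hbase.master.logcleaner.plugins</name>
>   <value>YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner</value>
> </property>
> <property>
>   <name>hbase.procedure.master.classes</name>
>   <value>YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager</value>
> </property>
> <property>
>   <name>hbase.procedure.regionserver.classes</name>
>   <value>YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager</value>
> </property>
> {code}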
> 2017-09-27 09:28:38,119 INFO  [main] metrics.MetricRegistries: Loaded 
> MetricRegistries class 
> org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
> REPAIR status: no failed sessions found. Checking failed delete backup 
> operation ...
> No failed backup DELETE operation found
> 2017-09-27 09:28:38,680 ERROR [main] impl.BackupSystemTable: Snapshot 
> snapshot_backup_system does not exists
> No failed backup MERGE operation found
> 2017-09-27 09:28:38,682 ERROR [main] impl.BackupSystemTable: Snapshot 
> snapshot_backup_system does not exists
> *./bin/hbase backup create full hdfs://localhost:8020/ -t test1*
> Please make sure that backup is enabled on the cluster. To enable backup, in 
> hbase-site.xml, set:
>  hbase.backup.enable=true
> hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner
> hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager
> hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager
> and restart the cluster
> 2017-09-27 09:29:00,837 INFO  [main] metrics.MetricRegistries: Loaded 
> MetricRegistries class 
> org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
> 2017-09-27 09:29:01,458 ERROR [main] impl.BackupAdminImpl: There is an active 
> session already running
> Backup session finished. Status: FAILURE
> 2017-09-27 09:29:01,460 ERROR [main] backup.BackupDriver: Error running 
> command-line tool
> java.io.IOException: There is an active backup exclusive operation
>       at org.apache.hadoop.hbase.backup.impl.BackupSystemTable.startBackupExclusiveOperation(BackupSystemTable.java:584)
>       at org.apache.hadoop.hbase.backup.impl.BackupManager.startBackupSession(BackupManager.java:373)
>       at org.apache.hadoop.hbase.backup.impl.TableBackupClient.init(TableBackupClient.java:100)
>       at org.apache.hadoop.hbase.backup.impl.TableBackupClient.<init>(TableBackupClient.java:78)
>       at org.apache.hadoop.hbase.backup.impl.FullTableBackupClient.<init>(FullTableBackupClient.java:61)
>       at org.apache.hadoop.hbase.backup.BackupClientFactory.create(BackupClientFactory.java:51)
>       at org.apache.hadoop.hbase.backup.impl.BackupAdminImpl.backupTables(BackupAdminImpl.java:595)
>       at org.apache.hadoop.hbase.backup.impl.BackupCommands$CreateCommand.execute(BackupCommands.java:336)
>       at org.apache.hadoop.hbase.backup.BackupDriver.parseAndRun(BackupDriver.java:137)
>       at org.apache.hadoop.hbase.backup.BackupDriver.doWork(BackupDriver.java:170)
>       at org.apache.hadoop.hbase.backup.BackupDriver.run(BackupDriver.java:203)
>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>       at org.apache.hadoop.hbase.backup.BackupDriver.main(BackupDriver.java:178)
> *History*
> Please make sure that backup is enabled on the cluster. To enable backup, in 
> hbase-site.xml, set:
>  hbase.backup.enable=true
> hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner
> hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager
> hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager
> and restart the cluster
> 2017-09-27 09:30:22,218 INFO  [main] metrics.MetricRegistries: Loaded 
> MetricRegistries class 
> org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
> {ID=backup_1506427470546,Type=FULL,Tables={test2,test1},State=COMPLETE,Start 
> time=Tue Sep 26 17:34:31 IST 2017,End time=Tue Sep 26 17:34:54 IST 
> 2017,Progress=100%}
> {ID=backup_1506427304040,Type=FULL,Tables={test2,test1},State=COMPLETE,Start 
> time=Tue Sep 26 17:31:45 IST 2017,End time=Tue Sep 26 17:32:08 IST 
> 2017,Progress=100%}
> {ID=backup_1506426863567,Type=INCREMENTAL,Tables={test2,test1},State=FAILED,Start
>  time=Tue Sep 26 17:24:24 IST 2017,Failed message=Failed copy from 
> hdfs://localhost:8020/backup/.tmp/backup_1506426863567 to 
> hdfs://localhost:8020/backup/backup_1506426863567/WALs,Progress=0%}
> {ID=backup_1506426677165,Type=INCREMENTAL,Tables={test2,test1},State=FAILED,Start
>  time=Tue Sep 26 17:21:18 IST 2017,Failed message=Failed copy from 
> hdfs://localhost:8020/backup/.tmp/backup_1506426677165 to 
> hdfs://localhost:8020/backup/backup_1506426677165/WALs,Progress=0%}
> {ID=backup_1506426595472,Type=INCREMENTAL,Tables={test2,test1},State=FAILED,Start
>  time=Tue Sep 26 17:19:56 IST 2017,Failed message=Failed copy from 
> hdfs://localhost:8020/backup/.tmp/backup_1506426595472 to 
> hdfs://localhost:8020/backup/backup_1506426595472/WALs,Progress=0%}
> {ID=backup_1506426363747,Type=INCREMENTAL,Tables={test2,test1},State=FAILED,Start
>  time=Tue Sep 26 17:16:05 IST 2017,Failed message=Failed copy from 
> hdfs://localhost:8020/backup/.tmp/backup_1506426363747 to 
> hdfs://localhost:8020/backup/backup_1506426363747/WALs,Progress=0%}
> {ID=backup_1506426242147,Type=INCREMENTAL,Tables={test2,test1},State=FAILED,Start
>  time=Tue Sep 26 17:14:04 IST 2017,Failed message=Failed copy from 
> hdfs://localhost:8020/backup/.tmp/backup_1506426242147 to 
> hdfs://localhost:8020/backup/backup_1506426242147/WALs,Progress=0%}
> {ID=backup_1506425081055,Type=INCREMENTAL,Tables={test2,test1},State=COMPLETE,Start
>  time=Tue Sep 26 16:54:42 IST 2017,End time=Tue Sep 26 16:55:22 IST 
> 2017,Progress=100%}
> {ID=backup_1506424909794,Type=INCREMENTAL,Tables={test2,test1},State=COMPLETE,Start
>  time=Tue Sep 26 16:51:52 IST 2017,End time=Tue Sep 26 16:52:44 IST 
> 2017,Progress=100%}
> {ID=backup_1506424807105,Type=INCREMENTAL,Tables={test2,test1},State=FAILED,Start
>  time=Tue Sep 26 16:50:08 IST 2017,Failed 
> message=java.lang.IllegalArgumentException: Can not create a Path from a null 
> string,Progress=0%}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
