[ https://issues.apache.org/jira/browse/SOLR-9952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16853404#comment-16853404 ]
Varun Thacker commented on SOLR-9952:
-------------------------------------

{quote}Ok let me take my statement back. The problem is ..... {quote}

I missed this statement, so I think I now understand how running on HDFS and backing up to S3 will work.

What do you think the scope of this Jira should be? Should it be closed because the functionality already exists, or should we still build an S3 backup repository that takes the fs.s3a params directly, instead of using HdfsBackupRepository and having to specify -Dsolr.hdfs.confdir=/etc/hadoop/conf plus a core-site.xml where the fs.s3a params are defined?

> S3BackupRepository
> ------------------
>
>                 Key: SOLR-9952
>                 URL: https://issues.apache.org/jira/browse/SOLR-9952
>             Project: Solr
>          Issue Type: New Feature
>  Security Level: Public (Default Security Level. Issues are Public)
>      Components: Backup/Restore
>        Reporter: Mikhail Khludnev
>        Priority: Major
>     Attachments: 0001-SOLR-9952-Added-dependencies-for-hadoop-amazon-integ.patch, 0002-SOLR-9952-Added-integration-test-for-checking-backup.patch, Running Solr on S3.pdf, core-site.xml.template
>
>
> I'd like to have a backup repository implementation that allows snapshotting to AWS S3.
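
For anyone skimming the thread, here is a minimal sketch of the workaround discussed above: keep HdfsBackupRepository, but give it an s3a:// home and a Hadoop conf directory whose core-site.xml carries the fs.s3a credentials. The repository element follows the HdfsBackupRepository example in the Solr Reference Guide, and fs.s3a.access.key / fs.s3a.secret.key are standard Hadoop S3A properties; the bucket name, paths, and key values below are placeholders, not settings taken from this issue or its attachments.

{code:xml}
<!-- solr.xml: HdfsBackupRepository writes through the Hadoop FileSystem that the
     URI scheme resolves to, so an s3a:// home sends the snapshot to S3. -->
<backup>
  <repository name="hdfs"
              class="org.apache.solr.core.backup.repository.HdfsBackupRepository"
              default="false">
    <str name="location">${solr.hdfs.default.backup.path}</str>
    <str name="solr.hdfs.default.backup.path">/solr-backups</str>
    <str name="solr.hdfs.home">s3a://my-backup-bucket/solr</str>
    <!-- Directory holding the core-site.xml below; the same value can be
         supplied at startup as -Dsolr.hdfs.confdir=/etc/hadoop/conf -->
    <str name="solr.hdfs.confdir">/etc/hadoop/conf</str>
  </repository>
</backup>
{code}

{code:xml}
<!-- /etc/hadoop/conf/core-site.xml: S3A credentials the repository picks up.
     Placeholder values only. -->
<configuration>
  <property>
    <name>fs.s3a.access.key</name>
    <value>MY_AWS_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>MY_AWS_SECRET_KEY</value>
  </property>
</configuration>
{code}

A backup would then be triggered the usual way, e.g. /admin/collections?action=BACKUP&name=nightly&collection=mycoll&repository=hdfs, which shows the indirection a dedicated S3 repository (one that accepts the fs.s3a params itself) would remove.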