[ https://issues.apache.org/jira/browse/SOLR-15864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17463474#comment-17463474 ]

Gus Heck commented on SOLR-15864:
---------------------------------

Some perspective (of which you may be aware)

The first, most important thing with Solr is to understand that it shouldn't be 
the system of record. It's an index you can use to find a reference to the 
document in the system of record. You can store additional fields/metadata, and 
systems often do get built that rely on that stored field data, which is fine, 
but we want to be very careful not to encourage usages where the index cannot 
be re-created from the source documents. Solr is not a database. Re-indexing 
should be expected, both to take advantage of new features and improvements in 
Lucene and for any upgrade of more than one major version beyond the version 
used for the original indexing.

Additionally, mature deployments of mission-critical systems often have two or 
more clusters in separate data centers, because as we all saw recently, even 
AWS data centers are not 100% reliable.

That said, there probably is a place for something like this when indexes grow 
to a size that is impractical or prohibitively expensive to re-index, and for 
situations where redundancy in an alternate data center has not been realized 
or is infeasible for budget or internal political reasons. Can you elaborate in 
some more detail on what's not working with the way you are doing it now? I'm 
not quite sure I understand your description.

> Add option for Immutable backups to S3 for Ransomware and Deleteware 
> mitigation
> -------------------------------------------------------------------------------
>
>                 Key: SOLR-15864
>                 URL: https://issues.apache.org/jira/browse/SOLR-15864
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public(Default Security Level. Issues are Public) 
>            Reporter: Michael Joyner
>            Priority: Major
>
> It would be an extremely useful feature to add to the S3 backup repository 
> (and possibly others, if supported) an option to mark all uploaded objects 
> as immutable for a defined period of time.
> If a file in the current backup already exists in the repository, simply 
> extend its immutable-until time.
> While I'm thinking of basic Ransomware and Deleteware mitigation, this could 
> also be used for Compliance mode.
> Currently I'm backing up to a bucket with automatic locking, but this doesn't 
> handle the case where the immutable-until time of an already uploaded index 
> file ends earlier than that of newer files. That leaves a timestamp gap, so 
> the immutable protection eventually lapses on some index files of a given 
> backup before others, opening up an avenue for attack.
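
For reference, the extend-the-retain-until-date behavior described above might 
look roughly like the sketch below using S3 Object Lock with the AWS SDK for 
Java v2. This is only an illustration, not how the S3 backup repository 
currently behaves: the class and method names and the choice of COMPLIANCE 
mode are assumptions, and it presumes Object Lock is enabled on the bucket.

    import java.time.Instant;
    import java.time.temporal.ChronoUnit;

    import software.amazon.awssdk.services.s3.S3Client;
    import software.amazon.awssdk.services.s3.model.GetObjectRetentionRequest;
    import software.amazon.awssdk.services.s3.model.ObjectLockRetention;
    import software.amazon.awssdk.services.s3.model.ObjectLockRetentionMode;
    import software.amazon.awssdk.services.s3.model.PutObjectRetentionRequest;
    import software.amazon.awssdk.services.s3.model.S3Exception;

    public class ExtendBackupRetention {

        // Ensure the Object Lock retain-until date on an existing backup file
        // is at least protectDays from now. Retention is only ever lengthened;
        // COMPLIANCE mode does not allow shortening or removing it.
        static void extendRetention(S3Client s3, String bucket, String key, long protectDays) {
            Instant wanted = Instant.now().plus(protectDays, ChronoUnit.DAYS);

            Instant current;
            try {
                current = s3.getObjectRetention(GetObjectRetentionRequest.builder()
                        .bucket(bucket).key(key).build())
                    .retention()
                    .retainUntilDate();
            } catch (S3Exception e) {
                current = null;  // no retention set on this object yet
            }

            if (current == null || current.isBefore(wanted)) {
                s3.putObjectRetention(PutObjectRetentionRequest.builder()
                    .bucket(bucket)
                    .key(key)
                    .retention(ObjectLockRetention.builder()
                        .mode(ObjectLockRetentionMode.COMPLIANCE)
                        .retainUntilDate(wanted)
                        .build())
                    .build());
            }
        }
    }

New uploads could presumably get the same protection up front via the 
objectLockMode and objectLockRetainUntilDate fields on PutObjectRequest, rather 
than through a separate PutObjectRetention call afterwards.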


