[Archivesspace_Users_Group] Survey of ArchivesSpace documentation writers

2019-05-21 Thread Kevin Clair
Hello!

The Documentation sub-team of the ArchivesSpace User Advisory Council is 
interested in the reasons why individual ArchivesSpace users are electing to 
write their own local documentation, either instead of or in addition to the 
documentation available at https://docs.archivesspace.org. We have developed a 
brief survey of the ArchivesSpace membership to determine the reasons for 
writing local documentation, with the hope of improving the core documentation 
and making it easier to supplement it with local practices.

The survey may be found at https://forms.gle/tTMjKvZyyfAxGCXN7. Responses in 
the next two to three weeks are appreciated. Please let me know if you have any 
questions!

cheers,
-Kevin Clair, for the ArchivesSpace UAC Documentation Sub-team
___
Archivesspace_Users_Group mailing list
Archivesspace_Users_Group@lyralists.lyrasis.org
http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group


Re: [Archivesspace_Users_Group] Rights Statement on public user interface

2019-05-21 Thread Anderson, Freya N (EED)
Thanks Mark and MC!

We were just including a notes field in the rights statement, along with the 
required type and start date, and rarely an end date.  We're starting now to 
include the information in notes fields, but because there was a publish 
checkbox in the Rights Statement Note, we just assumed checking it would 
publish the information, and we had over 150 finding aids entered before anyone 
realized that they weren't actually showing up.  I know, I know.  Assuming is a 
bad thing.  We're doing a bit of a rush job, as we have a deadline for taking 
down our old PDFs.

I believe that LibraryHost.com, our host, has now put in a feature request to 
Lyrasis.  I'm not sure if they'd be willing or able to implement a plug-in, but 
that would be wonderful if it were reasonably easy all around.  Otherwise, we 
can make do with the notes fields.

As a side note, part of what threw me is that it seems counterintuitive to me, 
as a complete n00b, for the rights statement not to show up in the public 
interface.  I wonder if a note to this effect should be included in the help 
documentation?  Is this where I should mention this?  Or maybe something on the 
wiki? Or is it only counterintuitive to me?

Thanks again!
Freya


Freya Anderson
Head, Information Services
Acting Head, Historical Collections
phone: 907.465.1315

Andrew P. Kashevaroff Building
Mail: PO Box 110571, Juneau, AK 99811
Visit: 395 Whittier St., Juneau, AK 99801

email | web | sign up for event notifications | The Information Center for Government




From: archivesspace_users_group-boun...@lyralists.lyrasis.org On Behalf Of 
Custer, Mark
Sent: Sunday, May 19, 2019 4:47 PM
To: Archivesspace Users Group 
Subject: Re: [Archivesspace_Users_Group] Rights Statement on public user 
interface

Dear Freya,

Unfortunately, that's right.  The rights statement records (as well as some 
other record types, like event records) do not appear in the public interface, 
as far as I'm aware.  They also do not get added to EAD exports (although they 
could, if there were a mapping for that data).

That said, the rights metadata is still available to the public interface, so 
that data could be surfaced with a plugin in any version of ArchivesSpace.  I 
don't know of any plugins offhand that do that (maybe others on the list do?), 
but I could come up with a pretty basic one if that's of interest.
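
To give a sense of what that might involve: a basic version could override a 
PUI view from a plugin. A rough sketch follows; the plugin name, partial path, 
and JSON keys are assumptions for illustration only, not names confirmed 
against the codebase.

  <%# plugins/rights_display/public/views/shared/_rights_statements.html.erb
      Hypothetical plugin and partial; the JSON keys below follow the shape of
      the staff-side rights statement record and should be verified. %>
  <% Array(record.json['rights_statements']).each do |rs| %>
    <div class="rights-statement">
      <h4><%= rs['rights_type'] %></h4>
      <p><%= [rs['start_date'], rs['end_date']].compact.join(' to ') %></p>
      <% Array(rs['notes']).each do |note| %>
        <p><%= Array(note['content']).join(' ') %></p>
      <% end %>
    </div>
  <% end %>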

There is a lot of data that can be stored in an ArchivesSpace rights 
statement, though (such as linked agents, acts, external documents, and 
notes), which makes adding them to the PUI a bit of a design challenge. I'll 
also admit that I'm not very familiar with those records in ArchivesSpace. 
Right now, we only use notes to capture that type of information.


Which fields are you using with your rights statements?

All my best,

Mark



From: archivesspace_users_group-boun...@lyralists.lyrasis.org on behalf of 
Anderson, Freya N (EED) <freya.ander...@alaska.gov>
Sent: Tuesday, May 14, 2019 8:17 PM
To: Archivesspace Users Group
Subject: [Archivesspace_Users_Group] Rights Statement on public user interface


Hello all,



It was recently pointed out to me by one of our team, as we rush to get content 
into ArchivesSpace, that the Rights Statement doesn't show up on the public 
user interface.  We enter it in the Resource, mark every Publish box, and click 
Publish All at the end.  Everything looks fine at the staff end, but no rights 
statement appears for the public.  Is this the way it's supposed to work?  Is 
there anything we can do so the rights statement shows up, other than adding it 
in the Notes?



We're using LibraryHost to host our instance.



Thanks!

Freya



Freya Anderson
Head, Information Services
Acting Head, Historical Collections
phone: 907.465.1315

Andrew P. Kashevaroff Building
Mail: PO Box 110571, Juneau, AK 99811
Visit: 395 Whittier St., Juneau, AK 99801

email | web | sign up for event notifications | The Information Center for Government

[Archivesspace_Users_Group] merge Top Container functionality

2019-05-21 Thread Benn Joseph
Hi all,
I'm curious whether there are any updates re: the ability to merge Top 
Containers. It looks like ANW-462 deals specifically with this but I can't tell 
if there has been much activity on it lately:

https://archivesspace.atlassian.net/browse/ANW-462?atlOrigin=eyJpIjoiZTc0MWQ4MDdlMjhjNGFlZmE4YTEwYzg3YTQ0ZmQzZTkiLCJwIjoiaiJ9

Thanks!
--Benn

Benn Joseph
Head of Archival Processing
Northwestern University Libraries
Northwestern University
www.library.northwestern.edu
benn.jos...@northwestern.edu
847.467.6581



-----Original Message-----
From: archivesspace_users_group-boun...@lyralists.lyrasis.org On Behalf Of 
Tang, Lydia
Sent: Wednesday, January 16, 2019 12:18 PM
To: Archivesspace Users Group 
Subject: Re: [Archivesspace_Users_Group] REMINDER: Proposal for Container 
Management Enhancements - Call for Community Input

I second the ability to merge containers!  
Lydia

From: archivesspace_users_group-boun...@lyralists.lyrasis.org on behalf of 
Valerie Addonizio
Reply-To: Archivesspace Users Group
Date: Wednesday, January 16, 2019 at 1:17 PM
To: Archivesspace Users Group 
Subject: Re: [Archivesspace_Users_Group] REMINDER: Proposal for Container 
Management Enhancements - Call for Community Input

I know that the comment period is closed, but this seemed like a logical place 
to ask whether the idea of container merging functionality was considered as 
part of this effort (I know it is not in the scope of work, but was it 
considered and not selected?), and whether other institutions are in need of 
such functionality.

From: archivesspace_users_group-boun...@lyralists.lyrasis.org 
[mailto:archivesspace_users_group-boun...@lyralists.lyrasis.org] On Behalf Of 
Bowers, Kate A.
Sent: Thursday, December 20, 2018 4:16 PM
To: Archivesspace Users Group
Subject: Re: [Archivesspace_Users_Group] REMINDER: Proposal for Container 
Management Enhancements - Call for Community Input

Dear ArchivesSpace community:

Apologies for the length of this. I’ll try to address a lot of comments, but 
not all!  Please let me know if (as they may well do) my elaborations only 
elicit more questions!

In general, most of these proposals derive from problems of scale. Harvard has 
30 repositories and over 200 users of ArchivesSpace. Others have to do with 
managing “medium rare” materials’ locations and containers in ArchivesSpace.  
(Medium-rare is a tongue-in-cheek term used to cover materials that might 
exist in multiple manifestations but that have archival or rare 
characteristics or treatment. Examples include entire libraries ingested into 
an archives, authors’ own copies of their books, annotated books, or the 
record copy of serials or reports kept by institutional archives.)

 • Multi-field top container indicators
 Some commenters wondered if the multiple fields were to accommodate child 
containers. To clarify, the suggestion was to facilitate parsing top container 
identifiers.  As a few commenters have surmised, this is to cope with legacy 
numbers.  These are especially common on medium-rare materials.
 One suggestion was to use a sort algorithm that would obviate the need for 
separate data fields. However, because more than one algorithm would be 
necessary across the installation, such a solution would require an added 
field to identify the algorithm, and probably a third field to retain a value, 
derived by the algorithm, to be sorted alphanumerically. Thus, the direct 
3-field solution seems simpler. (A 4-field suggestion was mooted in the 
committee as potentially more useful communally.) It does occur to me that 
there just might not be enough really old, really big repositories with lots 
of legacy identifiers in the ArchivesSpace community for the parsing of legacy 
numbers to be a common problem. I appreciate the recognition that a plug-in 
might be needed instead, but it would be worth hearing from any repositories 
with similar issues.
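
 For illustration only, here is the kind of single parsing algorithm under 
discussion, sketched in Ruby (purely hypothetical, not proposed application 
code). The catch is that each repository's legacy numbering would need its own 
variant of the pattern, hence the extra field to identify which algorithm 
produced the stored sort value:

  # Derive a sortable key by parsing one style of legacy indicator into
  # prefix / number / suffix parts -- a single hard-coded algorithm.
  def sort_key(indicator)
    prefix, number, suffix = indicator.match(/\A(\D*)(\d*)(.*)\z/).captures
    [prefix.strip, number.rjust(10, '0'), suffix.strip]  # zero-pad the number
  end

  ["Box 10", "Box 2", "Vol. 3a"].sort_by { |i| sort_key(i) }
  # => ["Box 2", "Box 10", "Vol. 3a"]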

 • Container and location profiles by repository
   We were envisioning a one-to-one profile-to-repository scenario. Due to 
the ArchivesSpace staff user interface requirement that one identify only a 
single repository at login, it is extremely easy for users to forget the 
impact they might have beyond their repository if they change or delete a 
shared record.  We have already experienced mistaken mergers and deletions of 
agents due to the design of the ArchivesSpace staff user interface, which does 
not allow one to see where a record may be linked beyond one's own repository. 
For this reason, it is wise to be able to limit the impact of changes to, and 
deletions of, location profiles and container profiles to the chosen 
repository.

 • Inactive
 As Maureen wisely intuited, inactive locations are necessary for recording 
a complete location history.  However, there are additional use cases. When a 
repository is renovating, for example (as is happening now at the Schlesinger 
Library), the shelves in a location may be inactive for a time and become 
active again when the building re-opens.  Other 

[Archivesspace_Users_Group] TAC Integrations Introduction

2019-05-21 Thread Galligan, Patrick
Hello,


I’m writing on behalf of the ArchivesSpace Technical Advisory Council’s 
Integrations sub-team. If you’re not familiar with our work, the Integrations 
sub-team supports the ArchivesSpace community by taking a transparent approach 
to documenting and facilitating the integration of systems with the 
ArchivesSpace application. Some of our activities include:

  *   Tracking current integrations and communicating their status to the 
larger community
  *   Creating resources that assist members of the community with their 
integration work
  *   Acting as liaisons between integration developers, the ArchivesSpace 
program team, and the community
  *   Acting as a general resource for those working on integrations


We’ve recently given our documentation a refresh -- please give it a look! You 
can read a little more about what we consider integrations on our “What Are 
Integrations?” page, and we have a few examples of existing integrations on 
our team’s “Integrations” page. We also have documentation about “How” and 
“Why” to integrate with ArchivesSpace if you’re thinking about or are 
currently working on a systems integration.
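
If you are starting to think about an integration, the usual entry point is 
the ArchivesSpace backend REST API: authenticate, take the session token from 
the response, and send it with each subsequent request. Here is a minimal 
sketch in Ruby; the host, port, and credentials are placeholders for your own 
instance:

  require 'net/http'
  require 'json'
  require 'uri'

  base = URI('http://localhost:8089')  # default backend port

  # POST /users/:username/login returns JSON containing a session token.
  login = Net::HTTP.post_form(URI("#{base}/users/admin/login"),
                              'password' => 'admin')
  session = JSON.parse(login.body)['session']

  # Pass the token in the X-ArchivesSpace-Session header on later calls.
  req = Net::HTTP::Get.new(URI("#{base}/repositories"))
  req['X-ArchivesSpace-Session'] = session
  res = Net::HTTP.start(base.host, base.port) { |http| http.request(req) }
  puts JSON.parse(res.body)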


We’re reaching out today because we saw an opportunity to better support 
integrations work by connecting with communities that might not already know 
about the work we do. We’re actively seeking more information about any past 
or current integrations work so that we can better share it out with others. 
If you’re working on an integration, we’d love to hear about it! Please either 
email us at as_tac_integrati...@lyralists.lyrasis.org or fill out our 
ArchivesSpace Integrations Form.


We’d really like to know what you’re working on or thinking about working on, 
and want to make sure as many people know of our existence as possible. We’re 
here to act as a resource and answer any questions you might have about how to 
go about integrating with ArchivesSpace. We may not always be able to answer 
your deepest technological questions, but we can definitely point you in the 
right direction.


Best,

Patrick Galligan


On behalf of the Integrations sub-team

Jared Campbell

Megan Firestone

Edgar Garcia

Maggie Hughes

Dallas Pillen

Trevor Thornton

Gregory Wiedeman
___
Archivesspace_Users_Group mailing list
Archivesspace_Users_Group@lyralists.lyrasis.org
http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group


Re: [Archivesspace_Users_Group] Help with robots.txt

2019-05-21 Thread Swanson, Bob
Thank you so much!
I will comply.

Bob Swanson
UConn Libraries
860-486-5260 - Office
860-617-1188 - Mobile

From: archivesspace_users_group-boun...@lyralists.lyrasis.org On Behalf Of 
Blake Carver
Sent: Tuesday, May 21, 2019 11:54 AM
To: Archivesspace Users Group 
Subject: Re: [Archivesspace_Users_Group] Help with robots.txt

Look at that!
https://github.com/archivesspace/archivesspace/commit/0bfb91e7f27a18b4cb6e0a27527be1041c877237#diff-f266d24dcc6fcbe9020ee4f31cf538f7
Yep, sure looks like that'll work as well.
So it seems like the easiest way to serve up a robots file is just to throw it 
in your config directory.

From: archivesspace_users_group-boun...@lyralists.lyrasis.org on behalf of 
Andrew Morrison <andrew.morri...@bodleian.ox.ac.uk>
Sent: Tuesday, May 21, 2019 11:37 AM
To: Archivesspace Users Group
Subject: Re: [Archivesspace_Users_Group] Help with robots.txt

Hello,

If you put a robots.txt file in the config folder of your ArchivesSpace 
system, it will be served in response to requests for /robots.txt after the 
next restart. I cannot remember where I read that, and cannot find it now, but 
I can confirm it works since, I believe, 2.6.0.

Regards,

Andrew Morrison
Software Engineer
Bodleian Digital Library Systems and Services
https://www.bodleian.ox.ac.uk/bdlss


On Tue, 2019-05-21 at 13:59 +, Swanson, Bob wrote:

Please forgive me if this is posted twice; I sent the following yesterday 
before I submitted the "acceptance Email" to the ArchivesSpace Users Group.  I 
don't see that it was posted on the board (am I doing this correctly?).



So far as I can tell, this is how I'm supposed to ask questions regarding 
ArchivesSpace.

Please forgive and correct me if I'm going about this incorrectly.



I am new to ArchivesSpace, Ruby, JBOD and web development, so I'm pretty dumb.



The PUI Pre-Launch checklist advises creating and updating robots.txt, so we 
would like to set up a robots.txt file to control what crawlers can access 
when they crawl our ArchivesSpace site https://archivessearch.lib.uconn.edu/.

I understand that robots.txt is supposed to go in the web root directory of the 
website.

In a normal apache configuration that's simple enough.



But,

We are serving ArchivesSpace via HTTPS.

a)   All Port 80 traffic is redirected to Port 443.

b)  443 traffic is proxied to 8081 (for the public interface) per the 
ArchivesSpace documentation.

  RequestHeader set X-Forwarded-Proto "https"

  ProxyPreserveHost On

  ProxyPass / http://localhost:8081/ retry=1 acquire=3000 timeout=600 keepalive=On

  ProxyPassReverse / http://localhost:8081/

So, my web root directory (/var/www/html) is empty (save some garbage left 
over from when I was testing).



I've read the documentation on www.robotstxt.org but I can't find anything 
that pertains to my situation.

I have to imagine that most ArchivesSpace sites are now HTTPS and use 
robots.txt, so this should be a somewhat standard implementation.



I do not find much information on the Users Group site pertaining to this.
I find reference to plans for implementing this at the web server level back 
in 2016, but nothing beyond that.

http://lyralists.lyrasis.org/pipermail/archivesspace_users_group/2016-August/003916.html




Re: [Archivesspace_Users_Group] Help with robots.txt

2019-05-21 Thread Blake Carver
Look at that!
https://github.com/archivesspace/archivesspace/commit/0bfb91e7f27a18b4cb6e0a27527be1041c877237#diff-f266d24dcc6fcbe9020ee4f31cf538f7
Yep, sure looks like that'll work as well.
So it seems like the easiest way to serve up a robots file is just to throw it 
in your config directory.

From: archivesspace_users_group-boun...@lyralists.lyrasis.org on behalf of 
Andrew Morrison
Sent: Tuesday, May 21, 2019 11:37 AM
To: Archivesspace Users Group
Subject: Re: [Archivesspace_Users_Group] Help with robots.txt

Hello,

If you put a robots.txt file in the config folder of your ArchivesSpace 
system, it will be served in response to requests for /robots.txt after the 
next restart. I cannot remember where I read that, and cannot find it now, but 
I can confirm it works since, I believe, 2.6.0.

Regards,

Andrew Morrison
Software Engineer
Bodleian Digital Library Systems and Services
https://www.bodleian.ox.ac.uk/bdlss


On Tue, 2019-05-21 at 13:59 +, Swanson, Bob wrote:

Please forgive me if this is posted twice; I sent the following yesterday 
before I submitted the “acceptance Email” to the ArchivesSpace Users Group.  I 
don’t see that it was posted on the board (am I doing this correctly?).



So far as I can tell, this is how I’m supposed to ask questions regarding 
ArchivesSpace.

Please forgive and correct me if I’m going about this incorrectly.



I am new to ArchivesSpace, Ruby, JBOD and web development, so I’m pretty dumb.



The PUI Pre-Launch checklist advises creating and updating robots.txt, so we 
would like to set up a robots.txt file to control what crawlers can access 
when they crawl our ArchivesSpace site https://archivessearch.lib.uconn.edu/.

I understand that robots.txt is supposed to go in the web root directory of the 
website.

In a normal apache configuration that’s simple enough.



But,

We are serving ArchivesSpace via HTTPS.

a)   All Port 80 traffic is redirected to Port 443.

b)  443 traffic is proxied to 8081 (for the public interface) per the 
ArchivesSpace documentation.

  RequestHeader set X-Forwarded-Proto "https"

  ProxyPreserveHost On

  ProxyPass / http://localhost:8081/ retry=1 acquire=3000 timeout=600 keepalive=On

  ProxyPassReverse / http://localhost:8081/

So, my web root directory (/var/www/html) is empty (save some garbage left 
over from when I was testing).



I’ve read the documentation on www.robotstxt.org but 
I can’t find anything that pertains to my situation.

I have to imagine that most ArchivesSpace sites are now HTTPS and use 
robots.txt, so this should be a somewhat standard implementation.



I do not find much information on the Users Group site pertaining to this.
I find reference to plans for implementing this at the web server level back 
in 2016, but nothing beyond that.

http://lyralists.lyrasis.org/pipermail/archivesspace_users_group/2016-August/003916.html



A search of the ArchivesSpace Technical Documentation for “robots” comes up 
empty as well.



Can you please direct me to any documentation that may exist on setting up a 
robots.txt file in a proxied HTTPS instance of ArchivesSpace?

Thank you, and please tolerate my naivety.







Bob Swanson

UConn Libraries

860-486-5260 – Office

860-617-1188 - Mobile



___
Archivesspace_Users_Group mailing list
Archivesspace_Users_Group@lyralists.lyrasis.org
http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group

___
Archivesspace_Users_Group mailing list
Archivesspace_Users_Group@lyralists.lyrasis.org
http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group


Re: [Archivesspace_Users_Group] Help with robots.txt

2019-05-21 Thread Andrew Morrison
Hello,

If you put a robots.txt file in the config folder of your ArchivesSpace 
system, it will be served in response to requests for /robots.txt after the 
next restart. I cannot remember where I read that, and cannot find it now, but 
I can confirm it works since, I believe, 2.6.0.
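
For example, something along these lines could go in that config folder as 
robots.txt. The rules here are placeholders only; which paths, if any, you 
disallow will depend on your own PUI and crawling policy:

  User-agent: *
  Disallow: /search
  Crawl-delay: 5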

Regards,

Andrew Morrison
Software Engineer
Bodleian Digital Library Systems and Services
https://www.bodleian.ox.ac.uk/bdlss


On Tue, 2019-05-21 at 13:59 +, Swanson, Bob wrote:
Please forgive me if this is posted twice; I sent the following yesterday 
before I submitted the “acceptance Email” to the ArchivesSpace Users Group.  I 
don’t see that it was posted on the board (am I doing this correctly?).

So far as I can tell, this is how I’m supposed to ask questions regarding 
ArchivesSpace.
Please forgive and correct me if I’m going about this incorrectly.

I am new to ArchivesSpace, Ruby, JBOD and web development, so I’m pretty dumb.

The PUI Pre-Launch checklist advises creating and updating robots.txt, so we 
would like to set up a robots.txt file to control what crawlers can access 
when they crawl our ArchivesSpace site https://archivessearch.lib.uconn.edu/.
I understand that robots.txt is supposed to go in the web root directory of the 
website.
In a normal apache configuration that’s simple enough.

But,
We are serving ArchivesSpace via HTTPS.

a)   All Port 80 traffic is redirected to Port 443.

b)  443 traffic is proxied to 8081 (for the public interface) per the 
ArchivesSpace documentation.
  RequestHeader set X-Forwarded-Proto "https"
  ProxyPreserveHost On
  ProxyPass / http://localhost:8081/ retry=1 acquire=3000 timeout=600 keepalive=On
  ProxyPassReverse / http://localhost:8081/
So, my web root directory (/var/www/html) is empty (save some garbage left 
over from when I was testing).

I’ve read the documentation on www.robotstxt.org but 
I can’t find anything that pertains to my situation.
I have to imagine that most ArchivesSpace sites are now HTTPS and use 
robots.txt, so this should be a somewhat standard implementation.

I do not find much information on the Users Group site pertaining to this.
I find reference to plans for implementing this at the web server level back 
in 2016, but nothing beyond that.
http://lyralists.lyrasis.org/pipermail/archivesspace_users_group/2016-August/003916.html

A search of the ArchivesSpace Technical Documentation for “robots” comes up 
empty as well.

Can you please direct me to any documentation that may exist on setting up a 
robots.txt file in a proxied HTTPS instance of ArchivesSpace?
Thank you, and please tolerate my naivety.



Bob Swanson
UConn Libraries
860-486-5260 – Office
860-617-1188 - Mobile


___
Archivesspace_Users_Group mailing list
Archivesspace_Users_Group@lyralists.lyrasis.org
http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group

___
Archivesspace_Users_Group mailing list
Archivesspace_Users_Group@lyralists.lyrasis.org
http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group


Re: [Archivesspace_Users_Group] Help with robots.txt

2019-05-21 Thread Blake Carver
I'll be sure to add this one to the docs, so let me know if this works!

I think you'll need to do an alias, something like this for Apache:

<Location "/robots.txt">
    SetHandler None
    Require all granted
</Location>
Alias /robots.txt /var/www/robots.txt

nginx, more like this:

  location /robots.txt {
    alias /var/www/robots.txt;
  }
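
One caveat, assuming a catch-all ProxyPass like the one in your config: you 
would likely also need to exclude /robots.txt from the proxy, and the 
exclusion has to come before the general rule, e.g.:

  # Exclusions must precede the general ProxyPass rule.
  ProxyPass /robots.txt !
  ProxyPass / http://localhost:8081/ retry=1 acquire=3000 timeout=600 keepalive=On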



From: archivesspace_users_group-boun...@lyralists.lyrasis.org on behalf of 
Swanson, Bob
Sent: Tuesday, May 21, 2019 9:59 AM
To: archivesspace_users_group@lyralists.lyrasis.org
Subject: [Archivesspace_Users_Group] Help with robots.txt


Please forgive me if this is posted twice; I sent the following yesterday 
before I submitted the “acceptance Email” to the ArchivesSpace Users Group.  I 
don’t see that it was posted on the board (am I doing this correctly?).



So far as I can tell, this is how I’m supposed to ask questions regarding 
ArchivesSpace.

Please forgive and correct me if I’m going about this incorrectly.



I am new to ArchivesSpace, Ruby, JBOD and web development, so I’m pretty dumb.



The PUI Pre-Launch checklist advises creating and updating robots.txt, so we 
would like to set up a robots.txt file to control what crawlers can access 
when they crawl our ArchivesSpace site https://archivessearch.lib.uconn.edu/.

I understand that robots.txt is supposed to go in the web root directory of the 
website.

In a normal apache configuration that’s simple enough.



But,

We are serving ArchivesSpace via HTTPS.

a)   All Port 80 traffic is redirected to Port 443.

b)  443 traffic is proxied to 8081 (for the public interface) per the 
ArchivesSpace documentation.

  RequestHeader set X-Forwarded-Proto "https"

  ProxyPreserveHost On

  ProxyPass / http://localhost:8081/ retry=1 acquire=3000 timeout=600 keepalive=On

  ProxyPassReverse / http://localhost:8081/

So, my web root directory (/var/www/html) is empty (save some garbage left 
over from when I was testing).



I’ve read the documentation on www.robotstxt.org but 
I can’t find anything that pertains to my situation.

I have to imagine that most ArchivesSpace sites are now HTTPS and use 
robots.txt, so this should be a somewhat standard implementation.



I do not find much information on the Users Group site pertaining to this.
I find reference to plans for implementing this at the web server level back 
in 2016, but nothing beyond that.

http://lyralists.lyrasis.org/pipermail/archivesspace_users_group/2016-August/003916.html



A search of the ArchivesSpace Technical Documentation for “robots” comes up 
empty as well.



Can you please direct me to any documentation that may exist on setting up a 
robots.txt file in a proxied HTTPS instance of ArchivesSpace?

Thank you, and please tolerate my naivety.







Bob Swanson

UConn Libraries

860-486-5260 – Office

860-617-1188 - Mobile


___
Archivesspace_Users_Group mailing list
Archivesspace_Users_Group@lyralists.lyrasis.org
http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group


[Archivesspace_Users_Group] Help with robots.txt

2019-05-21 Thread Swanson, Bob
Please forgive me if this is posted twice; I sent the following yesterday 
before I submitted the "acceptance Email" to the ArchivesSpace Users Group.  I 
don't see that it was posted on the board (am I doing this correctly?).

So far as I can tell, this is how I'm supposed to ask questions regarding 
ArchivesSpace.
Please forgive and correct me if I'm going about this incorrectly.

I am new to ArchivesSpace, Ruby, JBOD and web development, so I'm pretty dumb.

The PUI Pre-Launch checklist advises creating and updating robots.txt, so we 
would like to set up a robots.txt file to control what crawlers can access 
when they crawl our ArchivesSpace site https://archivessearch.lib.uconn.edu/.
I understand that robots.txt is supposed to go in the web root directory of the 
website.
In a normal apache configuration that's simple enough.

But,
We are serving ArchivesSpace via HTTPS.

a)   All Port 80 traffic is redirected to Port 443.

b)  443 traffic is proxied to 8081 (for the public interface) per the 
ArchivesSpace documentation.
  RequestHeader set X-Forwarded-Proto "https"
  ProxyPreserveHost On
  ProxyPass / http://localhost:8081/ retry=1 acquire=3000 timeout=600 keepalive=On
  ProxyPassReverse / http://localhost:8081/
So, my web root directory (/var/www/html) is empty (save some garbage left 
over from when I was testing).

I've read the documentation on www.robotstxt.org but 
I can't find anything that pertains to my situation.
I have to imagine that most ArchivesSpace sites are now HTTPS and use 
robots.txt, so this should be a somewhat standard implementation.

I do not find much information on the Users Group site pertaining to this.
I find reference to plans for implementing this at the web server level back 
in 2016, but nothing beyond that.
http://lyralists.lyrasis.org/pipermail/archivesspace_users_group/2016-August/003916.html

A search of the ArchivesSpace Technical Documentation for "robots" comes up 
empty as well.

Can you please direct me to any documentation that may exist on setting up a 
robots.txt file in a proxied HTTPS instance of ArchivesSpace?
Thank you, and please tolerate my naivety.



Bob Swanson
UConn Libraries
860-486-5260 - Office
860-617-1188 - Mobile

___
Archivesspace_Users_Group mailing list
Archivesspace_Users_Group@lyralists.lyrasis.org
http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group