Hi Ron, Hi everyone,

I had some similar requirements in the past and therefore created the 
Sphinx extension "Sphinx-Collections":
https://sphinx-collections.readthedocs.io/

It is still in beta and I'm not sure it covers all of your needs, as your 
project setup sounds quite complex.

In short, Sphinx-Collections collects files and folders from different 
locations and puts them into the source folder.
These files/folders can be created from a path, a string, a Jinja template, 
a Python function, a Git repository, and more.
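
For reference, a minimal conf.py setup could look roughly like this. The 
driver names and options are written from memory, so please double-check 
them against the documentation:

   extensions = ["sphinxcontrib.collections"]

   collections = {
       "my_test": {
           # 'copy_folder' is one of the drivers; 'git', 'jinja',
           # 'function' and others also exist
           "driver": "copy_folder",
           "source": "../other_repo/docs/",
           # Assumption: a collection can be kept inactive by default
           # and switched on via a Sphinx tag at build time
           "active": False,
           "tags": ["my_test"],
       },
   }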

There is also a directive available, which "activates" rst content only if 
the related collection has been used/integrated during the build:

.. if-collection:: my_test, my_data 
   
   .. toctree:: 
       
      /_collections/my_test/index

The needed collections are normally activated by passing a configurable 
Sphinx tag to the sphinx-build call.
The extension then collects the data before the build starts and 
cleans/erases the collected data afterwards, so that, for instance, the Git 
working tree stays clean.
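
For example, assuming the collection above is tied to the tag "my_test", 
the build call could look like this:

   sphinx-build -b html -t my_test . _build/html

Without the -t flag the collection should stay inactive, the files are not 
copied, and the if-collection block above is skipped.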

I would be happy if you could give it a try, and please get in contact with 
me if anything is missing or not working.


Ron Stone wrote on Friday, 3 September 2021 at 14:44:46 UTC+2:

> I am currently working on a project that is attempting to ‘blend’ rst 
> files from four separate git repos into arbitrary doc sets. Each repo 
> represents a set of product documentation with its own tox/sphinx 
> environment. We do this by, for example, symlinking a doc from repo A into 
> B at runtime and using a combination of substitutions and include files to 
> handle most customizations. 
>
> It can get complicated. Sometimes we need to cross link files and modify 
> indexes, sometimes we want to add a document, but without certain files or 
> sections. Sometimes we need to link just a subset of files or a single file 
> from one repo into another. In some cases we need to link a doc from A (an 
> open source product) to C (a proprietary version) and then ‘add’ some 
> proprietary files to the linked content. Cross linking between repos is 
> achieved using intersphinx, but since one repo is hosted by an ‘upstream’ 
> open source partner yet contains content pulled into different downstream 
> repos, we parse objects.inv from that project and modify local copies of 
> the source files with the appropriate intersphinx tags.
>
> We run sphinx with the -W flag, so builds are picky about having exactly 
> the .rst files required - and no more - in scope at build time.
>
> This works, but I am looking for a cleaner approach. Two alternatives may 
> be:
>
>    - Create a ‘meta’ repo from the four current repos and use it as a 
>    single source of content for builds. Since this would expose unused 
>    content to the builds, we would turn -W off and instead read build 
>    output into a loop, choosing which warnings to fail on. 
>    - Create a ‘meta’ repo from which we would pull the content required 
>    for a given build into a sphinx instance in a different location outside 
>    the repos, allowing us to continue to run strict builds (-W) and exposing 
>    only the artifacts required for a given build to sphinx.
>
> Has anyone dealt with a situation like this before? From my experience, 
> Sphinx is not really intended to be used in a CMS type context, where a 
> superset of content is mixed and matched into different subset 
> configurations for different builds. Are we missing something about its 
> behaviour that would allow a simpler approach? If not, has anyone come up 
> with alternatives to the two options above or have experience with either?
>
> Thanks!
>
