Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package terraform for openSUSE:Factory 
checked in at 2022-09-30 17:58:28
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/terraform (Old)
 and      /work/SRC/openSUSE:Factory/.terraform.new.2275 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "terraform"

Fri Sep 30 17:58:28 2022 rev:41 rq:1007168 version:1.3.1

Changes:
--------
--- /work/SRC/openSUSE:Factory/terraform/terraform.changes      2022-09-27 
20:14:35.817914707 +0200
+++ /work/SRC/openSUSE:Factory/.terraform.new.2275/terraform.changes    
2022-09-30 17:58:50.229382349 +0200
@@ -1,0 +2,11 @@
+Fri Sep 30 05:13:08 UTC 2022 - Johannes Kastl <[email protected]>
+
+- update to 1.3.1:
+  * BUG FIXES:
+    - Fixed a crash when using objects with optional attributes and default 
values in collections, most visible with nested modules. (#31847)
+    - Prevent cycles in some situations where a provider depends on resources 
in the configuration which are participating in planned changes. (#31857)
+    - Fixed an error when attempting to destroy a configuration where 
resources do not exist in the state. (#31858)
+    - Data sources which cannot be read during import will no longer prevent the 
state from being serialized. (#31871)
+    - Fixed a crash which occurred when a resource with a precondition and/or a 
postcondition appeared inside a module with two or more instances. (#31860)
+
+-------------------------------------------------------------------

Old:
----
  terraform-1.3.0.obscpio
  terraform-1.3.0.tar.gz

New:
----
  terraform-1.3.1.obscpio
  terraform-1.3.1.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ terraform.spec ++++++
--- /var/tmp/diff_new_pack.6a1BDT/_old  2022-09-30 17:58:51.349384743 +0200
+++ /var/tmp/diff_new_pack.6a1BDT/_new  2022-09-30 17:58:51.357384760 +0200
@@ -17,7 +17,7 @@
 
 
 Name:           terraform
-Version:        1.3.0
+Version:        1.3.1
 Release:        0
 Summary:        Tool for building infrastructure safely and efficiently
 License:        MPL-2.0

++++++ _service ++++++
--- /var/tmp/diff_new_pack.6a1BDT/_old  2022-09-30 17:58:51.397384846 +0200
+++ /var/tmp/diff_new_pack.6a1BDT/_new  2022-09-30 17:58:51.401384854 +0200
@@ -3,8 +3,8 @@
     <param name="url">https://github.com/hashicorp/terraform</param>
     <param name="scm">git</param>
     <param name="filename">terraform</param>
-    <param name="versionformat">1.3.0</param>
-    <param name="revision">v1.3.0</param>
+    <param name="versionformat">1.3.1</param>
+    <param name="revision">v1.3.1</param>
     <param name="exclude">.git</param>
   </service>
   <service name="tar" mode="disabled"/>
@@ -16,7 +16,7 @@
     <param name="basename">terraform</param>
   </service>
   <service name="go_modules" mode="disabled">
-    <param name="archive">terraform-1.3.0.tar.gz</param>
+    <param name="archive">terraform-1.3.1.tar.gz</param>
   </service>
 </services>
 

++++++ terraform-1.3.0.obscpio -> terraform-1.3.1.obscpio ++++++
/work/SRC/openSUSE:Factory/terraform/terraform-1.3.0.obscpio 
/work/SRC/openSUSE:Factory/.terraform.new.2275/terraform-1.3.1.obscpio differ: 
char 49, line 1

++++++ terraform-1.3.0.tar.gz -> terraform-1.3.1.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/terraform-1.3.0/CHANGELOG.md 
new/terraform-1.3.1/CHANGELOG.md
--- old/terraform-1.3.0/CHANGELOG.md    2022-09-21 15:40:32.000000000 +0200
+++ new/terraform-1.3.1/CHANGELOG.md    2022-09-28 15:46:33.000000000 +0200
@@ -1,3 +1,16 @@
+## 1.3.1 (September 28, 2022)
+
+NOTE:
+* On `darwin/amd64` and `darwin/arm64` architectures, `terraform` binaries are 
now built with CGO enabled. This should not have any user-facing impact, except 
in cases where the pure Go DNS resolver causes problems on recent versions of 
macOS: using CGO may mitigate these issues. Please see the upstream bug 
https://github.com/golang/go/issues/52839 for more details.
+
+BUG FIXES:
+
+* Fixed a crash when using objects with optional attributes and default values 
in collections, most visible with nested modules. 
([#31847](https://github.com/hashicorp/terraform/issues/31847))
+* Prevent cycles in some situations where a provider depends on resources in 
the configuration which are participating in planned changes. 
([#31857](https://github.com/hashicorp/terraform/issues/31857))
+* Fixed an error when attempting to destroy a configuration where resources do 
not exist in the state. 
([#31858](https://github.com/hashicorp/terraform/issues/31858))
+* Data sources which cannot be read during import will no longer prevent the state 
from being serialized. 
([#31871](https://github.com/hashicorp/terraform/issues/31871))
+* Fixed a crash which occurred when a resource with a precondition and/or a 
postcondition appeared inside a module with two or more instances. 
([#31860](https://github.com/hashicorp/terraform/issues/31860))
+
 ## 1.3.0 (September 21, 2022)
 
 NEW FEATURES:
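
The note above concerns Go's choice between its pure Go DNS resolver and the cgo/system resolver on macOS. As an illustration only (standard library code, not anything Terraform itself does), a Go program can explicitly prefer the pure Go resolver through net.Resolver, which is the component the linked upstream bug is about:

  package main

  import (
      "context"
      "fmt"
      "net"
      "time"
  )

  func main() {
      // PreferGo selects Go's built-in DNS resolver instead of the system
      // (cgo) resolver; this is the resolver the note above refers to.
      r := &net.Resolver{PreferGo: true}

      ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
      defer cancel()

      addrs, err := r.LookupHost(ctx, "example.com")
      if err != nil {
          fmt.Println("lookup failed:", err)
          return
      }
      fmt.Println(addrs)
  }
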
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/terraform-1.3.0/go.mod new/terraform-1.3.1/go.mod
--- old/terraform-1.3.0/go.mod  2022-09-21 15:40:32.000000000 +0200
+++ new/terraform-1.3.1/go.mod  2022-09-28 15:46:33.000000000 +0200
@@ -42,7 +42,7 @@
        github.com/hashicorp/go-uuid v1.0.3
        github.com/hashicorp/go-version v1.6.0
        github.com/hashicorp/hcl v0.0.0-20170504190234-a4b07c25de5f
-       github.com/hashicorp/hcl/v2 v2.14.0
+       github.com/hashicorp/hcl/v2 v2.14.1
        github.com/hashicorp/terraform-config-inspect 
v0.0.0-20210209133302-4fd17a0faac2
        github.com/hashicorp/terraform-registry-address 
v0.0.0-20220623143253-7d51757b572c
        github.com/hashicorp/terraform-svchost 
v0.0.0-20200729002733-f050f53b9734
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/terraform-1.3.0/go.sum new/terraform-1.3.1/go.sum
--- old/terraform-1.3.0/go.sum  2022-09-21 15:40:32.000000000 +0200
+++ new/terraform-1.3.1/go.sum  2022-09-28 15:46:33.000000000 +0200
@@ -387,8 +387,8 @@
 github.com/hashicorp/hcl v0.0.0-20170504190234-a4b07c25de5f 
h1:UdxlrJz4JOnY8W+DbLISwf2B8WXEolNRA8BGCwI9jws=
 github.com/hashicorp/hcl v0.0.0-20170504190234-a4b07c25de5f/go.mod 
h1:oZtUIOe8dh44I2q6ScRibXws4Ajl+d+nod3AaR9vL5w=
 github.com/hashicorp/hcl/v2 v2.0.0/go.mod 
h1:oVVDG71tEinNGYCxinCYadcmKU9bglqW9pV3txagJ90=
-github.com/hashicorp/hcl/v2 v2.14.0 
h1:jX6+Q38Ly9zaAJlAjnFVyeNSNCKKW8D0wvyg7vij5Wc=
-github.com/hashicorp/hcl/v2 v2.14.0/go.mod 
h1:e4z5nxYlWNPdDSNYX+ph14EvWYMFm3eP0zIUqPc2jr0=
+github.com/hashicorp/hcl/v2 v2.14.1 
h1:x0BpjfZ+CYdbiz+8yZTQ+gdLO7IXvOut7Da+XJayx34=
+github.com/hashicorp/hcl/v2 v2.14.1/go.mod 
h1:e4z5nxYlWNPdDSNYX+ph14EvWYMFm3eP0zIUqPc2jr0=
 github.com/hashicorp/jsonapi v0.0.0-20210826224640-ee7dae0fb22d 
h1:9ARUJJ1VVynB176G1HCwleORqCaXm/Vx0uUi0dL26I0=
 github.com/hashicorp/jsonapi v0.0.0-20210826224640-ee7dae0fb22d/go.mod 
h1:Yog5+CPEM3c99L1CL2CFCYoSzgWm5vTU58idbRUaLik=
 github.com/hashicorp/logutils v1.0.0/go.mod 
h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/terraform-1.3.0/internal/dag/graph.go 
new/terraform-1.3.1/internal/dag/graph.go
--- old/terraform-1.3.0/internal/dag/graph.go   2022-09-21 15:40:32.000000000 
+0200
+++ new/terraform-1.3.1/internal/dag/graph.go   2022-09-28 15:46:33.000000000 
+0200
@@ -230,6 +230,28 @@
        s.Add(source)
 }
 
+// Subsume imports all of the nodes and edges from the given graph into the
+// receiver, leaving the given graph unchanged.
+//
+// If any of the nodes in the given graph are already present in the receiver
+// then the existing node will be retained and any new edges from the given
+// graph will be connected with it.
+//
+// If the given graph has edges in common with the receiver then they will be
+// ignored, because each pair of nodes can only be connected once.
+func (g *Graph) Subsume(other *Graph) {
+       // We're using Set.Filter just as a "visit each element" here, so we're
+       // not doing anything with the result (which will always be empty).
+       other.vertices.Filter(func(i interface{}) bool {
+               g.Add(i)
+               return false
+       })
+       other.edges.Filter(func(i interface{}) bool {
+               g.Connect(i.(Edge))
+               return false
+       })
+}
+
 // String outputs some human-friendly output for the graph structure.
 func (g *Graph) StringWithNodeTypes() string {
        var buf bytes.Buffer
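
The Subsume documentation above describes the merge semantics: import every node and edge, keep nodes that already exist, and ignore duplicate edges. A minimal stand-alone sketch of those semantics, using plain maps instead of Terraform's internal dag types:

  package main

  import "fmt"

  type edge struct{ from, to string }

  type graph struct {
      nodes map[string]struct{}
      edges map[edge]struct{}
  }

  func newGraph() *graph {
      return &graph{nodes: map[string]struct{}{}, edges: map[edge]struct{}{}}
  }

  func (g *graph) add(n string)   { g.nodes[n] = struct{}{} }
  func (g *graph) connect(e edge) { g.edges[e] = struct{}{} }

  // subsume copies every node and edge from other into g, leaving other
  // unchanged; nodes and edges already present in g are simply retained.
  func (g *graph) subsume(other *graph) {
      for n := range other.nodes {
          g.add(n)
      }
      for e := range other.edges {
          g.connect(e)
      }
  }

  func main() {
      a, b := newGraph(), newGraph()
      a.add("root")
      b.add("root") // shared node: retained once after the merge
      b.add("leaf")
      b.connect(edge{"root", "leaf"})

      a.subsume(b)
      fmt.Println(len(a.nodes), len(a.edges)) // 2 1
  }
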
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/terraform-1.3.0/internal/terraform/context_apply2_test.go 
new/terraform-1.3.1/internal/terraform/context_apply2_test.go
--- old/terraform-1.3.0/internal/terraform/context_apply2_test.go       
2022-09-21 15:40:32.000000000 +0200
+++ new/terraform-1.3.1/internal/terraform/context_apply2_test.go       
2022-09-28 15:46:33.000000000 +0200
@@ -1201,6 +1201,25 @@
        state, diags := ctx.Apply(plan, m)
        assertNoErrors(t, diags)
 
+       // Resource changes which have dependencies across providers which
+       // themselves depend on resources can result in cycles.
+       // Because other_object transitively depends on the module resources
+       // through its provider, we trigger changes on both sides of this 
boundary
+       // to ensure we can create a valid plan.
+       //
+       // Taint the object to make sure a replacement works in the plan.
+       otherObjAddr := mustResourceInstanceAddr("other_object.other")
+       otherObj := state.ResourceInstance(otherObjAddr)
+       otherObj.Current.Status = states.ObjectTainted
+       // Force a change which needs to be reverted.
+       testObjAddr := mustResourceInstanceAddr(`module.mod["a"].test_object.a`)
+       testObjA := state.ResourceInstance(testObjAddr)
+       testObjA.Current.AttrsJSON = 
[]byte(`{"test_bool":null,"test_list":null,"test_map":null,"test_number":null,"test_string":"changed"}`)
+
+       _, diags = ctx.Plan(m, state, opts)
+       assertNoErrors(t, diags)
+       return
+
        otherProvider.ConfigureProviderCalled = false
        otherProvider.ConfigureProviderFn = func(req 
providers.ConfigureProviderRequest) (resp providers.ConfigureProviderResponse) {
                // check that our config is complete, even during a destroy plan
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/terraform-1.3.0/internal/terraform/context_apply_test.go 
new/terraform-1.3.1/internal/terraform/context_apply_test.go
--- old/terraform-1.3.0/internal/terraform/context_apply_test.go        
2022-09-21 15:40:32.000000000 +0200
+++ new/terraform-1.3.1/internal/terraform/context_apply_test.go        
2022-09-28 15:46:33.000000000 +0200
@@ -12165,6 +12165,70 @@
        }
 }
 
+func TestContext2Apply_moduleVariableOptionalAttributesDefaultChild(t 
*testing.T) {
+       m := testModuleInline(t, map[string]string{
+               "main.tf": `
+variable "in" {
+  type    = list(object({
+    a = optional(set(string))
+  }))
+  default = [
+       { a = [ "foo" ] },
+       { },
+  ]
+}
+
+module "child" {
+  source = "./child"
+  in     = var.in
+}
+
+output "out" {
+  value = module.child.out
+}
+`,
+               "child/main.tf": `
+variable "in" {
+  type    = list(object({
+    a = optional(set(string), [])
+  }))
+  default = []
+}
+
+output "out" {
+  value = var.in
+}
+`,
+       })
+
+       ctx := testContext2(t, &ContextOpts{})
+
+       // We don't specify a value for the variable here, relying on its 
defined
+       // default.
+       plan, diags := ctx.Plan(m, states.NewState(), 
SimplePlanOpts(plans.NormalMode, testInputValuesUnset(m.Module.Variables)))
+       if diags.HasErrors() {
+               t.Fatal(diags.ErrWithWarnings())
+       }
+
+       state, diags := ctx.Apply(plan, m)
+       if diags.HasErrors() {
+               t.Fatal(diags.ErrWithWarnings())
+       }
+
+       got := state.RootModule().OutputValues["out"].Value
+       want := cty.ListVal([]cty.Value{
+               cty.ObjectVal(map[string]cty.Value{
+                       "a": cty.SetVal([]cty.Value{cty.StringVal("foo")}),
+               }),
+               cty.ObjectVal(map[string]cty.Value{
+                       "a": cty.SetValEmpty(cty.String),
+               }),
+       })
+       if !want.RawEquals(got) {
+               t.Fatalf("wrong result\ngot:  %#v\nwant: %#v", got, want)
+       }
+}
+
 func TestContext2Apply_provisionerSensitive(t *testing.T) {
        m := testModule(t, "apply-provisioner-sensitive")
        p := testProvider("aws")
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/terraform-1.3.0/internal/terraform/context_import.go 
new/terraform-1.3.1/internal/terraform/context_import.go
--- old/terraform-1.3.0/internal/terraform/context_import.go    2022-09-21 
15:40:32.000000000 +0200
+++ new/terraform-1.3.1/internal/terraform/context_import.go    2022-09-28 
15:46:33.000000000 +0200
@@ -82,6 +82,11 @@
                return state, diags
        }
 
+       // Data sources which could not be read during the import plan will be
+       // unknown. We need to strip those objects out so that the state can be
+       // serialized.
+       walker.State.RemovePlannedResourceInstanceObjects()
+
        newState := walker.State.Close()
        return newState, diags
 }
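
The comment added above explains the fix for #31871: instance objects that could not be read during the import walk remain in a planned (unknown) state and must be stripped before the state is serialized. A simplified, illustrative stand-in for that filtering step, using a plain map keyed by address rather than Terraform's state types:

  package main

  import "fmt"

  type status int

  const (
      objectReady status = iota
      objectPlanned // placeholder whose values are not yet known
  )

  func main() {
      state := map[string]status{
          "aws_instance.foo":         objectReady,
          "data.aws_data_source.bar": objectPlanned, // unreadable during import
      }

      // Drop planned objects so the remaining state can be serialized.
      for addr, st := range state {
          if st == objectPlanned {
              delete(state, addr)
          }
      }

      fmt.Println(state) // map[aws_instance.foo:0]
  }
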
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/terraform-1.3.0/internal/terraform/context_import_test.go 
new/terraform-1.3.1/internal/terraform/context_import_test.go
--- old/terraform-1.3.0/internal/terraform/context_import_test.go       
2022-09-21 15:40:32.000000000 +0200
+++ new/terraform-1.3.1/internal/terraform/context_import_test.go       
2022-09-28 15:46:33.000000000 +0200
@@ -430,7 +430,24 @@
 
 func TestContextImport_refresh(t *testing.T) {
        p := testProvider("aws")
-       m := testModule(t, "import-provider")
+       m := testModuleInline(t, map[string]string{
+               "main.tf": `
+provider "aws" {
+  foo = "bar"
+}
+
+resource "aws_instance" "foo" {
+}
+
+
+// we are only importing aws_instance.foo, so these resources will be unknown
+resource "aws_instance" "bar" {
+}
+data "aws_data_source" "bar" {
+  foo = aws_instance.bar.id
+}
+`})
+
        ctx := testContext2(t, &ContextOpts{
                Providers: map[addrs.Provider]providers.Factory{
                        addrs.NewDefaultProvider("aws"): 
testProviderFuncFixed(p),
@@ -448,6 +465,13 @@
                },
        }
 
+       p.ReadDataSourceResponse = &providers.ReadDataSourceResponse{
+               State: cty.ObjectVal(map[string]cty.Value{
+                       "id":  cty.StringVal("id"),
+                       "foo": cty.UnknownVal(cty.String),
+               }),
+       }
+
        p.ReadResourceFn = nil
 
        p.ReadResourceResponse = &providers.ReadResourceResponse{
@@ -471,6 +495,10 @@
                t.Fatalf("unexpected errors: %s", diags.Err())
        }
 
+       if d := 
state.ResourceInstance(mustResourceInstanceAddr("data.aws_data_source.bar")); d 
!= nil {
+               t.Errorf("data.aws_data_source.bar has a status of 
ObjectPlanned and should not be in the state\ngot:%#v\n", d.Current)
+       }
+
        actual := strings.TrimSpace(state.String())
        expected := strings.TrimSpace(testImportRefreshStr)
        if actual != expected {
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/terraform-1.3.0/internal/terraform/context_plan2_test.go 
new/terraform-1.3.1/internal/terraform/context_plan2_test.go
--- old/terraform-1.3.0/internal/terraform/context_plan2_test.go        
2022-09-21 15:40:32.000000000 +0200
+++ new/terraform-1.3.1/internal/terraform/context_plan2_test.go        
2022-09-28 15:46:33.000000000 +0200
@@ -401,6 +401,129 @@
        }
 }
 
+func TestContext2Plan_resourceChecksInExpandedModule(t *testing.T) {
+       // When a resource is in a nested module we have two levels of expansion
+       // to do: first expand the module the resource is declared in, and then
+       // expand the resource itself.
+       //
+       // In earlier versions of Terraform we did that expansion as two levels
+       // of DynamicExpand, which led to a bug where we didn't have any central
+       // location from which to register all of the instances of a checkable
+       // resource.
+       //
+       // We now handle the full expansion all in one graph node and one 
dynamic
+       // subgraph, which avoids the problem. This is a regression test for the
+       // earlier bug. If this test is panicking with "duplicate checkable 
objects
+       // report" then that suggests the bug is reintroduced and we're now back
+       // to reporting each module instance separately again, which is 
incorrect.
+
+       p := testProvider("test")
+       p.GetProviderSchemaResponse = &providers.GetProviderSchemaResponse{
+               Provider: providers.Schema{
+                       Block: &configschema.Block{},
+               },
+               ResourceTypes: map[string]providers.Schema{
+                       "test": {
+                               Block: &configschema.Block{},
+                       },
+               },
+       }
+       p.ReadResourceFn = func(req providers.ReadResourceRequest) (resp 
providers.ReadResourceResponse) {
+               resp.NewState = req.PriorState
+               return resp
+       }
+       p.PlanResourceChangeFn = func(req providers.PlanResourceChangeRequest) 
(resp providers.PlanResourceChangeResponse) {
+               resp.PlannedState = cty.EmptyObjectVal
+               return resp
+       }
+       p.ApplyResourceChangeFn = func(req 
providers.ApplyResourceChangeRequest) (resp 
providers.ApplyResourceChangeResponse) {
+               resp.NewState = req.PlannedState
+               return resp
+       }
+
+       m := testModuleInline(t, map[string]string{
+               "main.tf": `
+                       module "child" {
+                               source = "./child"
+                               count = 2 # must be at least 2 for this test to 
be valid
+                       }
+               `,
+               "child/child.tf": `
+                       locals {
+                               a = "a"
+                       }
+
+                       resource "test" "test1" {
+                               lifecycle {
+                                       postcondition {
+                                               # It doesn't matter what this 
checks as long as it
+                                               # passes, because if we don't 
handle expansion properly
+                                               # then we'll crash before we 
even get to evaluating this.
+                                               condition = local.a == local.a
+                                               error_message = "Postcondition 
failed."
+                                       }
+                               }
+                       }
+
+                       resource "test" "test2" {
+                               count = 2
+
+                               lifecycle {
+                                       postcondition {
+                                               # It doesn't matter what this 
checks as long as it
+                                               # passes, because if we don't 
handle expansion properly
+                                               # then we'll crash before we 
even get to evaluating this.
+                                               condition = local.a == local.a
+                                               error_message = "Postcondition 
failed."
+                                       }
+                               }
+                       }
+               `,
+       })
+
+       ctx := testContext2(t, &ContextOpts{
+               Providers: map[addrs.Provider]providers.Factory{
+                       addrs.NewDefaultProvider("test"): 
testProviderFuncFixed(p),
+               },
+       })
+
+       priorState := states.NewState()
+       plan, diags := ctx.Plan(m, priorState, DefaultPlanOpts)
+       assertNoErrors(t, diags)
+
+       resourceInsts := []addrs.AbsResourceInstance{
+               mustResourceInstanceAddr("module.child[0].test.test1"),
+               mustResourceInstanceAddr("module.child[0].test.test2[0]"),
+               mustResourceInstanceAddr("module.child[0].test.test2[1]"),
+               mustResourceInstanceAddr("module.child[1].test.test1"),
+               mustResourceInstanceAddr("module.child[1].test.test2[0]"),
+               mustResourceInstanceAddr("module.child[1].test.test2[1]"),
+       }
+
+       for _, instAddr := range resourceInsts {
+               t.Run(fmt.Sprintf("results for %s", instAddr), func(t 
*testing.T) {
+                       if rc := plan.Changes.ResourceInstance(instAddr); rc != 
nil {
+                               if got, want := rc.Action, plans.Create; got != 
want {
+                                       t.Errorf("wrong action for %s\ngot:  
%s\nwant: %s", instAddr, got, want)
+                               }
+                               if got, want := rc.ActionReason, 
plans.ResourceInstanceChangeNoReason; got != want {
+                                       t.Errorf("wrong action reason for 
%s\ngot:  %s\nwant: %s", instAddr, got, want)
+                               }
+                       } else {
+                               t.Errorf("no planned change for %s", instAddr)
+                       }
+
+                       if checkResult := 
plan.Checks.GetObjectResult(instAddr); checkResult != nil {
+                               if got, want := checkResult.Status, 
checks.StatusPass; got != want {
+                                       t.Errorf("wrong check status for 
%s\ngot:  %s\nwant: %s", instAddr, got, want)
+                               }
+                       } else {
+                               t.Errorf("no check result for %s", instAddr)
+                       }
+               })
+       }
+}
+
 func TestContext2Plan_dataResourceChecksManagedResourceChange(t *testing.T) {
        // This tests the situation where the remote system contains data that
        // isn't valid per a data resource postcondition, but that the
@@ -3518,3 +3641,44 @@
                t.Fatalf("no cycle error found:\n got: %s\n", msg)
        }
 }
+
+// plan a destroy with no state where configuration could fail to evaluate
+// expansion indexes.
+func TestContext2Plan_emptyDestroy(t *testing.T) {
+       m := testModuleInline(t, map[string]string{
+               "main.tf": `
+locals {
+  enable = true
+  value  = local.enable ? module.example[0].out : null
+}
+
+module "example" {
+  count  = local.enable ? 1 : 0
+  source = "./example"
+}
+`,
+               "example/main.tf": `
+resource "test_resource" "x" {
+}
+
+output "out" {
+  value = test_resource.x
+}
+`,
+       })
+
+       p := testProvider("test")
+       state := states.NewState()
+
+       ctx := testContext2(t, &ContextOpts{
+               Providers: map[addrs.Provider]providers.Factory{
+                       addrs.NewDefaultProvider("test"): 
testProviderFuncFixed(p),
+               },
+       })
+
+       _, diags := ctx.Plan(m, state, &PlanOpts{
+               Mode: plans.DestroyMode,
+       })
+
+       assertNoErrors(t, diags)
+}
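
The long comment in TestContext2Plan_resourceChecksInExpandedModule above explains that checkable objects for a resource must be reported from one central place, because reporting them once per module instance triggers a "duplicate checkable objects report" failure. A hypothetical, minimal registry sketch (not the real checks subsystem) illustrating that failure mode:

  package main

  import "fmt"

  type registry struct {
      seen map[string]bool
  }

  // report registers the expanded instances for one configuration resource;
  // calling it twice for the same resource is treated as a bug.
  func (r *registry) report(configResource string, instances []string) error {
      if r.seen[configResource] {
          return fmt.Errorf("duplicate checkable objects report for %s", configResource)
      }
      r.seen[configResource] = true
      _ = instances // a real implementation would record these
      return nil
  }

  func main() {
      r := &registry{seen: map[string]bool{}}

      // Correct: a single report covering every module instance of the resource.
      fmt.Println(r.report("module.child.test.test1",
          []string{"module.child[0].test.test1", "module.child[1].test.test1"}))

      // Incorrect: a second report for the same resource is rejected.
      fmt.Println(r.report("module.child.test.test1",
          []string{"module.child[1].test.test1"}))
  }
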
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/terraform-1.3.0/internal/terraform/context_plan_test.go 
new/terraform-1.3.1/internal/terraform/context_plan_test.go
--- old/terraform-1.3.0/internal/terraform/context_plan_test.go 2022-09-21 
15:40:32.000000000 +0200
+++ new/terraform-1.3.1/internal/terraform/context_plan_test.go 2022-09-28 
15:46:33.000000000 +0200
@@ -4536,7 +4536,20 @@
 func TestContext2Plan_ignoreChangesWildcard(t *testing.T) {
        m := testModule(t, "plan-ignore-changes-wildcard")
        p := testProvider("aws")
-       p.PlanResourceChangeFn = testDiffFn
+       p.PlanResourceChangeFn = func(req providers.PlanResourceChangeRequest) 
(resp providers.PlanResourceChangeResponse) {
+               // computed attributes should not be set in config
+               id := req.Config.GetAttr("id")
+               if !id.IsNull() {
+                       t.Error("computed id set in plan config")
+               }
+
+               foo := req.Config.GetAttr("foo")
+               if foo.IsNull() {
+                       t.Error(`missing "foo" during plan, was set to "bar" in 
state and config`)
+               }
+
+               return testDiffFn(req)
+       }
 
        state := states.NewState()
        root := state.EnsureModule(addrs.RootModuleInstance)
@@ -4544,7 +4557,7 @@
                mustResourceInstanceAddr("aws_instance.foo").Resource,
                &states.ResourceInstanceObjectSrc{
                        Status:    states.ObjectReady,
-                       AttrsJSON: 
[]byte(`{"id":"bar","ami":"ami-abcd1234","instance":"t2.micro","type":"aws_instance"}`),
+                       AttrsJSON: 
[]byte(`{"id":"bar","ami":"ami-abcd1234","instance":"t2.micro","type":"aws_instance","foo":"bar"}`),
                },
                
mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`),
        )
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/terraform-1.3.0/internal/terraform/graph.go 
new/terraform-1.3.1/internal/terraform/graph.go
--- old/terraform-1.3.0/internal/terraform/graph.go     2022-09-21 
15:40:32.000000000 +0200
+++ new/terraform-1.3.1/internal/terraform/graph.go     2022-09-28 
15:46:33.000000000 +0200
@@ -82,8 +82,9 @@
                        log.Printf("[TRACE] vertex %q: expanding dynamic 
subgraph", dag.VertexName(v))
 
                        g, err := ev.DynamicExpand(vertexCtx)
-                       if err != nil {
-                               diags = diags.Append(err)
+                       diags = diags.Append(err)
+                       if diags.HasErrors() {
+                               log.Printf("[TRACE] vertex %q: failed expanding 
dynamic subgraph: %s", dag.VertexName(v), err)
                                return
                        }
                        if g != nil {
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/terraform-1.3.0/internal/terraform/graph_builder_plan.go 
new/terraform-1.3.1/internal/terraform/graph_builder_plan.go
--- old/terraform-1.3.0/internal/terraform/graph_builder_plan.go        
2022-09-21 15:40:32.000000000 +0200
+++ new/terraform-1.3.1/internal/terraform/graph_builder_plan.go        
2022-09-28 15:46:33.000000000 +0200
@@ -170,6 +170,10 @@
                // TargetsTransformer can determine which nodes to keep in the 
graph.
                &DestroyEdgeTransformer{},
 
+               &pruneUnusedNodesTransformer{
+                       skip: b.Operation != walkPlanDestroy,
+               },
+
                // Target
                &TargetsTransformer{Targets: b.Targets},
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/terraform-1.3.0/internal/terraform/node_resource_abstract_instance.go 
new/terraform-1.3.1/internal/terraform/node_resource_abstract_instance.go
--- old/terraform-1.3.0/internal/terraform/node_resource_abstract_instance.go   
2022-09-21 15:40:32.000000000 +0200
+++ new/terraform-1.3.1/internal/terraform/node_resource_abstract_instance.go   
2022-09-28 15:46:33.000000000 +0200
@@ -777,7 +777,7 @@
        // starting values.
        // Here we operate on the marked values, so as to revert any changes to 
the
        // marks as well as the value.
-       configValIgnored, ignoreChangeDiags := n.processIgnoreChanges(priorVal, 
origConfigVal)
+       configValIgnored, ignoreChangeDiags := n.processIgnoreChanges(priorVal, 
origConfigVal, schema)
        diags = diags.Append(ignoreChangeDiags)
        if ignoreChangeDiags.HasErrors() {
                return plan, state, keyData, diags
@@ -881,7 +881,7 @@
                // providers that we must accommodate the behavior for now, so 
for
                // ignore_changes to work at all on these values, we will 
revert the
                // ignored values once more.
-               plannedNewVal, ignoreChangeDiags = 
n.processIgnoreChanges(unmarkedPriorVal, plannedNewVal)
+               plannedNewVal, ignoreChangeDiags = 
n.processIgnoreChanges(unmarkedPriorVal, plannedNewVal, schema)
                diags = diags.Append(ignoreChangeDiags)
                if ignoreChangeDiags.HasErrors() {
                        return plan, state, keyData, diags
@@ -1145,7 +1145,7 @@
        return plan, state, keyData, diags
 }
 
-func (n *NodeAbstractResource) processIgnoreChanges(prior, config cty.Value) 
(cty.Value, tfdiags.Diagnostics) {
+func (n *NodeAbstractResource) processIgnoreChanges(prior, config cty.Value, 
schema *configschema.Block) (cty.Value, tfdiags.Diagnostics) {
        // ignore_changes only applies when an object already exists, since we
        // can't ignore changes to a thing we've not created yet.
        if prior.IsNull() {
@@ -1158,9 +1158,23 @@
        if len(ignoreChanges) == 0 && !ignoreAll {
                return config, nil
        }
+
        if ignoreAll {
-               return prior, nil
+               // If we are trying to ignore all attribute changes, we must 
filter
+               // computed attributes out from the prior state to avoid 
sending them
+               // to the provider as if they were included in the 
configuration.
+               ret, _ := cty.Transform(prior, func(path cty.Path, v cty.Value) 
(cty.Value, error) {
+                       attr := schema.AttributeByPath(path)
+                       if attr != nil && attr.Computed && !attr.Optional {
+                               return cty.NullVal(v.Type()), nil
+                       }
+
+                       return v, nil
+               })
+
+               return ret, nil
        }
+
        if prior.IsNull() || config.IsNull() {
                // Ignore changes doesn't apply when we're creating for the 
first time.
                // Proposed should never be null here, but if it is then we'll 
just let it be.
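
The ignore_changes = all hunk above uses cty.Transform to null out computed-only attributes from the prior value so they are not sent back to the provider as if they were configuration. A small sketch of that transform using the public go-cty library, with "id" hard-coded as a hypothetical computed attribute instead of consulting a real provider schema:

  package main

  import (
      "fmt"

      "github.com/zclconf/go-cty/cty"
  )

  func main() {
      prior := cty.ObjectVal(map[string]cty.Value{
          "id":  cty.StringVal("i-abc123"),     // computed by the provider
          "ami": cty.StringVal("ami-abcd1234"), // set in configuration
      })

      // Walk every value; null out the attributes we treat as computed-only.
      ret, _ := cty.Transform(prior, func(path cty.Path, v cty.Value) (cty.Value, error) {
          if len(path) == 1 {
              if step, ok := path[0].(cty.GetAttrStep); ok && step.Name == "id" {
                  return cty.NullVal(v.Type()), nil
              }
          }
          return v, nil
      })

      fmt.Printf("%#v\n", ret) // "id" is now null, "ami" is retained
  }
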
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/terraform-1.3.0/internal/terraform/node_resource_plan.go 
new/terraform-1.3.1/internal/terraform/node_resource_plan.go
--- old/terraform-1.3.0/internal/terraform/node_resource_plan.go        
2022-09-21 15:40:32.000000000 +0200
+++ new/terraform-1.3.1/internal/terraform/node_resource_plan.go        
2022-09-28 15:46:33.000000000 +0200
@@ -11,10 +11,9 @@
        "github.com/hashicorp/terraform/internal/tfdiags"
 )
 
-// nodeExpandPlannableResource handles the first layer of resource
-// expansion.  We need this extra layer so DynamicExpand is called twice for
-// the resource, the first to expand the Resource for each module instance, and
-// the second to expand each ResourceInstance for the expanded Resources.
+// nodeExpandPlannableResource represents an addrs.ConfigResource and 
implements
+// DynamicExpand to a subgraph containing all of the addrs.AbsResourceInstance
+// resulting from both the containing module and resource-specific expansion.
 type nodeExpandPlannableResource struct {
        *NodeAbstractResource
 
@@ -53,12 +52,16 @@
        _ GraphNodeAttachResourceConfig = (*nodeExpandPlannableResource)(nil)
        _ GraphNodeAttachDependencies   = (*nodeExpandPlannableResource)(nil)
        _ GraphNodeTargetable           = (*nodeExpandPlannableResource)(nil)
+       _ graphNodeExpandsInstances     = (*nodeExpandPlannableResource)(nil)
 )
 
 func (n *nodeExpandPlannableResource) Name() string {
        return n.NodeAbstractResource.Name() + " (expand)"
 }
 
+func (n *nodeExpandPlannableResource) expandsInstances() {
+}
+
 // GraphNodeAttachDependencies
 func (n *nodeExpandPlannableResource) AttachDependencies(deps 
[]addrs.ConfigResource) {
        n.dependencies = deps
@@ -90,23 +93,8 @@
        expander := ctx.InstanceExpander()
        moduleInstances := expander.ExpandModule(n.Addr.Module)
 
-       // Add the current expanded resource to the graph
-       for _, module := range moduleInstances {
-               resAddr := n.Addr.Resource.Absolute(module)
-               g.Add(&NodePlannableResource{
-                       NodeAbstractResource:     n.NodeAbstractResource,
-                       Addr:                     resAddr,
-                       ForceCreateBeforeDestroy: n.ForceCreateBeforeDestroy,
-                       dependencies:             n.dependencies,
-                       skipRefresh:              n.skipRefresh,
-                       skipPlanChanges:          n.skipPlanChanges,
-                       forceReplace:             n.forceReplace,
-               })
-       }
-
        // Lock the state while we inspect it
        state := ctx.State().Lock()
-       defer ctx.State().Unlock()
 
        var orphans []*states.Resource
        for _, res := range state.Resources(n.Addr) {
@@ -117,12 +105,18 @@
                                break
                        }
                }
-               // Address form state was not found in the current config
+               // The module instance of the resource in the state doesn't 
exist
+               // in the current config, so this whole resource is orphaned.
                if !found {
                        orphans = append(orphans, res)
                }
        }
 
+       // We'll no longer use the state directly here, and the other functions
+       // we'll call below may use it so we'll release the lock.
+       state = nil
+       ctx.State().Unlock()
+
        // The concrete resource factory we'll use for orphans
        concreteResourceOrphan := func(a *NodeAbstractResourceInstance) 
*NodePlannableResourceInstanceOrphan {
                // Add the config and state since we don't do that via 
transforms
@@ -150,72 +144,68 @@
                }
        }
 
-       return &g, nil
-}
-
-// NodePlannableResource represents a resource that is "plannable":
-// it is ready to be planned in order to create a diff.
-type NodePlannableResource struct {
-       *NodeAbstractResource
-
-       Addr addrs.AbsResource
-
-       // ForceCreateBeforeDestroy might be set via our GraphNodeDestroyerCBD
-       // during graph construction, if dependencies require us to force this
-       // on regardless of what the configuration says.
-       ForceCreateBeforeDestroy *bool
-
-       // skipRefresh indicates that we should skip refreshing individual 
instances
-       skipRefresh bool
-
-       // skipPlanChanges indicates we should skip trying to plan change 
actions
-       // for any instances.
-       skipPlanChanges bool
-
-       // forceReplace are resource instance addresses where the user wants to
-       // force generating a replace action. This set isn't pre-filtered, so
-       // it might contain addresses that have nothing to do with the resource
-       // that this node represents, which the node itself must therefore 
ignore.
-       forceReplace []addrs.AbsResourceInstance
-
-       dependencies []addrs.ConfigResource
-}
-
-var (
-       _ GraphNodeModuleInstance       = (*NodePlannableResource)(nil)
-       _ GraphNodeDestroyerCBD         = (*NodePlannableResource)(nil)
-       _ GraphNodeDynamicExpandable    = (*NodePlannableResource)(nil)
-       _ GraphNodeReferenceable        = (*NodePlannableResource)(nil)
-       _ GraphNodeReferencer           = (*NodePlannableResource)(nil)
-       _ GraphNodeConfigResource       = (*NodePlannableResource)(nil)
-       _ GraphNodeAttachResourceConfig = (*NodePlannableResource)(nil)
-)
+       // The above dealt with the expansion of the containing module, so now
+       // we need to deal with the expansion of the resource itself across all
+       // instances of the module.
+       //
+       // We'll gather up all of the leaf instances we learn about along the 
way
+       // so that we can inform the checks subsystem of which instances it 
should
+       // be expecting check results for, below.
+       var diags tfdiags.Diagnostics
+       instAddrs := addrs.MakeSet[addrs.Checkable]()
+       for _, module := range moduleInstances {
+               resAddr := n.Addr.Resource.Absolute(module)
+               err := n.expandResourceInstances(ctx, resAddr, &g, instAddrs)
+               diags = diags.Append(err)
+       }
+       if diags.HasErrors() {
+               return nil, diags.ErrWithWarnings()
+       }
 
-func (n *NodePlannableResource) Path() addrs.ModuleInstance {
-       return n.Addr.Module
-}
+       // If this is a resource that participates in custom condition checks
+       // (i.e. it has preconditions or postconditions) then the check state
+       // wants to know the addresses of the checkable objects so that it can
+       // treat them as unknown status if we encounter an error before actually
+       // visiting the checks.
+       if checkState := ctx.Checks(); 
checkState.ConfigHasChecks(n.NodeAbstractResource.Addr) {
+               checkState.ReportCheckableObjects(n.NodeAbstractResource.Addr, 
instAddrs)
+       }
 
-func (n *NodePlannableResource) Name() string {
-       return n.Addr.String()
+       return &g, diags.ErrWithWarnings()
 }
 
-// GraphNodeExecutable
-func (n *NodePlannableResource) Execute(ctx EvalContext, op walkOperation) 
tfdiags.Diagnostics {
+// expandResourceInstances calculates the dynamic expansion for the resource
+// itself in the context of a particular module instance.
+//
+// It has several side-effects:
+//   - Adds a node to Graph g for each leaf resource instance it discovers, 
whether present or orphaned.
+//   - Registers the expansion of the resource in the "expander" object 
embedded inside EvalContext ctx.
+//   - Adds each present (non-orphaned) resource instance address to instAddrs 
(guaranteed to always be addrs.AbsResourceInstance, despite being declared as 
addrs.Checkable).
+//
+// After calling this for each of the module instances the resource appears
+// within, the caller must register the final superset instAddrs with the
+// checks subsystem so that it knows the fully expanded set of checkable
+// object instances for this resource instance.
+func (n *nodeExpandPlannableResource) expandResourceInstances(globalCtx 
EvalContext, resAddr addrs.AbsResource, g *Graph, instAddrs 
addrs.Set[addrs.Checkable]) error {
        var diags tfdiags.Diagnostics
 
        if n.Config == nil {
                // Nothing to do, then.
-               log.Printf("[TRACE] NodeApplyableResource: no configuration 
present for %s", n.Name())
-               return diags
+               log.Printf("[TRACE] nodeExpandPlannableResource: no 
configuration present for %s", n.Name())
+               return diags.ErrWithWarnings()
        }
 
+       // The rest of our work here needs to know which module instance it's
+       // working in, so that it can evaluate expressions in the appropriate 
scope.
+       moduleCtx := globalCtx.WithPath(resAddr.Module)
+
        // writeResourceState is responsible for informing the expander of what
        // repetition mode this resource has, which allows 
expander.ExpandResource
        // to work below.
-       moreDiags := n.writeResourceState(ctx, n.Addr)
+       moreDiags := n.writeResourceState(moduleCtx, resAddr)
        diags = diags.Append(moreDiags)
        if moreDiags.HasErrors() {
-               return diags
+               return diags.ErrWithWarnings()
        }
 
        // Before we expand our resource into potentially many resource 
instances,
@@ -223,8 +213,8 @@
        // consistent with the repetition mode of the resource. In other words,
        // we're aiming to catch a situation where naming a particular resource
        // instance would require an instance key but the given address has 
none.
-       expander := ctx.InstanceExpander()
-       instanceAddrs := 
expander.ExpandResource(n.ResourceAddr().Absolute(ctx.Path()))
+       expander := moduleCtx.InstanceExpander()
+       instanceAddrs := expander.ExpandResource(resAddr)
 
        // If there's a number of instances other than 1 then we definitely need
        // an index.
@@ -279,60 +269,42 @@
                }
        }
        // NOTE: The actual interpretation of n.forceReplace to produce replace
-       // actions is in NodeAbstractResourceInstance.plan, because we must do 
so
-       // on a per-instance basis rather than for the whole resource.
-
-       return diags
-}
-
-// GraphNodeDestroyerCBD
-func (n *NodePlannableResource) CreateBeforeDestroy() bool {
-       if n.ForceCreateBeforeDestroy != nil {
-               return *n.ForceCreateBeforeDestroy
-       }
+       // actions is in the per-instance function we're about to call, because
+       // we need to evaluate it on a per-instance basis.
 
-       // If we have no config, we just assume no
-       if n.Config == nil || n.Config.Managed == nil {
-               return false
+       for _, addr := range instanceAddrs {
+               // If this resource is participating in the "checks" mechanism 
then our
+               // caller will need to know all of our expanded instance 
addresses as
+               // checkable object instances.
+               // (NOTE: instAddrs probably already has other instance 
addresses in it
+               // from earlier calls to this function with different resource 
addresses,
+               // because its purpose is to aggregate them all together into a 
single set.)
+               instAddrs.Add(addr)
+       }
+
+       // Our graph builder mechanism expects to always be constructing new
+       // graphs rather than adding to existing ones, so we'll first
+       // construct a subgraph just for this individual module's instances and 
+       // then we'll steal all of its nodes and edges to incorporate into our
+       // main graph which contains all of the resource instances together.
+       instG, err := n.resourceInstanceSubgraph(moduleCtx, resAddr, 
instanceAddrs)
+       if err != nil {
+               diags = diags.Append(err)
+               return diags.ErrWithWarnings()
        }
+       g.Subsume(&instG.AcyclicGraph.Graph)
 
-       return n.Config.Managed.CreateBeforeDestroy
-}
-
-// GraphNodeDestroyerCBD
-func (n *NodePlannableResource) ModifyCreateBeforeDestroy(v bool) error {
-       n.ForceCreateBeforeDestroy = &v
-       return nil
+       return diags.ErrWithWarnings()
 }
 
-// GraphNodeDynamicExpandable
-func (n *NodePlannableResource) DynamicExpand(ctx EvalContext) (*Graph, error) 
{
+func (n *nodeExpandPlannableResource) resourceInstanceSubgraph(ctx 
EvalContext, addr addrs.AbsResource, instanceAddrs []addrs.AbsResourceInstance) 
(*Graph, error) {
        var diags tfdiags.Diagnostics
 
-       // Our instance expander should already have been informed about the
-       // expansion of this resource and of all of its containing modules, so
-       // it can tell us which instance addresses we need to process.
-       expander := ctx.InstanceExpander()
-       instanceAddrs := 
expander.ExpandResource(n.ResourceAddr().Absolute(ctx.Path()))
-
        // Our graph transformers require access to the full state, so we'll
        // temporarily lock it while we work on this.
        state := ctx.State().Lock()
        defer ctx.State().Unlock()
 
-       // If this is a resource that participates in custom condition checks
-       // (i.e. it has preconditions or postconditions) then the check state
-       // wants to know the addresses of the checkable objects so that it can
-       // treat them as unknown status if we encounter an error before actually
-       // visiting the checks.
-       if checkState := ctx.Checks(); 
checkState.ConfigHasChecks(n.NodeAbstractResource.Addr) {
-               checkableAddrs := addrs.MakeSet[addrs.Checkable]()
-               for _, addr := range instanceAddrs {
-                       checkableAddrs.Add(addr)
-               }
-               checkState.ReportCheckableObjects(n.NodeAbstractResource.Addr, 
checkableAddrs)
-       }
-
        // The concrete resource factory we'll use
        concreteResource := func(a *NodeAbstractResourceInstance) dag.Vertex {
                // check if this node is being imported first
@@ -397,7 +369,7 @@
                // Add the count/for_each orphans
                &OrphanResourceInstanceCountTransformer{
                        Concrete:      concreteResourceOrphan,
-                       Addr:          n.Addr,
+                       Addr:          addr,
                        InstanceAddrs: instanceAddrs,
                        State:         state,
                },
@@ -418,8 +390,8 @@
        // Build the graph
        b := &BasicGraphBuilder{
                Steps: steps,
-               Name:  "NodePlannableResource",
+               Name:  "nodeExpandPlannableResource",
        }
-       graph, diags := b.Build(ctx.Path())
+       graph, diags := b.Build(addr.Module)
        return graph, diags.ErrWithWarnings()
 }
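
The refactored nodeExpandPlannableResource above performs both levels of expansion (module instances, then resource instances) in one place and gathers every leaf address into one set before reporting it to the checks state. A toy sketch of that aggregation, using plain strings instead of Terraform's addrs types; the addresses mirror those asserted in the expanded-module test earlier in this diff:

  package main

  import "fmt"

  func main() {
      moduleInstances := []string{"module.child[0]", "module.child[1]"}

      // expandResource stands in for the per-module resource expansion;
      // here test.test2 has count = 2 within each module instance.
      expandResource := func(module string) []string {
          return []string{
              module + ".test.test2[0]",
              module + ".test.test2[1]",
          }
      }

      // Aggregate every leaf instance across all module instances, mirroring
      // how instAddrs is accumulated before ReportCheckableObjects is called.
      instAddrs := map[string]struct{}{}
      for _, mod := range moduleInstances {
          for _, addr := range expandResource(mod) {
              instAddrs[addr] = struct{}{}
          }
      }

      fmt.Println(len(instAddrs)) // 4
  }
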
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/terraform-1.3.0/internal/terraform/node_resource_plan_test.go 
new/terraform-1.3.1/internal/terraform/node_resource_plan_test.go
--- old/terraform-1.3.0/internal/terraform/node_resource_plan_test.go   
2022-09-21 15:40:32.000000000 +0200
+++ new/terraform-1.3.1/internal/terraform/node_resource_plan_test.go   
1970-01-01 01:00:00.000000000 +0100
@@ -1,63 +0,0 @@
-package terraform
-
-import (
-       "testing"
-
-       "github.com/hashicorp/terraform/internal/addrs"
-       "github.com/hashicorp/terraform/internal/configs"
-       "github.com/hashicorp/terraform/internal/instances"
-       "github.com/hashicorp/terraform/internal/states"
-)
-
-func TestNodePlannableResourceExecute(t *testing.T) {
-       state := states.NewState()
-       ctx := &MockEvalContext{
-               StateState:               state.SyncWrapper(),
-               InstanceExpanderExpander: instances.NewExpander(),
-       }
-
-       t.Run("no config", func(t *testing.T) {
-               node := NodePlannableResource{
-                       NodeAbstractResource: &NodeAbstractResource{
-                               Config: nil,
-                       },
-                       Addr: mustAbsResourceAddr("test_instance.foo"),
-               }
-               diags := node.Execute(ctx, walkApply)
-               if diags.HasErrors() {
-                       t.Fatalf("unexpected error: %s", diags.Err())
-               }
-               if !state.Empty() {
-                       t.Fatalf("expected no state, got:\n %s", state.String())
-               }
-       })
-
-       t.Run("simple", func(t *testing.T) {
-
-               node := NodePlannableResource{
-                       NodeAbstractResource: &NodeAbstractResource{
-                               Config: &configs.Resource{
-                                       Mode: addrs.ManagedResourceMode,
-                                       Type: "test_instance",
-                                       Name: "foo",
-                               },
-                               ResolvedProvider: addrs.AbsProviderConfig{
-                                       Provider: 
addrs.NewDefaultProvider("test"),
-                                       Module:   addrs.RootModule,
-                               },
-                       },
-                       Addr: mustAbsResourceAddr("test_instance.foo"),
-               }
-               diags := node.Execute(ctx, walkApply)
-               if diags.HasErrors() {
-                       t.Fatalf("unexpected error: %s", diags.Err())
-               }
-               if state.Empty() {
-                       t.Fatal("expected resources in state, got empty state")
-               }
-               r := state.Resource(mustAbsResourceAddr("test_instance.foo"))
-               if r == nil {
-                       t.Fatal("test_instance.foo not found in state")
-               }
-       })
-}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/terraform-1.3.0/internal/terraform/testdata/plan-ignore-changes-wildcard/main.tf
 
new/terraform-1.3.1/internal/terraform/testdata/plan-ignore-changes-wildcard/main.tf
--- 
old/terraform-1.3.0/internal/terraform/testdata/plan-ignore-changes-wildcard/main.tf
        2022-09-21 15:40:32.000000000 +0200
+++ 
new/terraform-1.3.1/internal/terraform/testdata/plan-ignore-changes-wildcard/main.tf
        2022-09-28 15:46:33.000000000 +0200
@@ -5,6 +5,7 @@
 resource "aws_instance" "foo" {
   ami      = "${var.foo}"
   instance = "${var.bar}"
+  foo = "bar"
 
   lifecycle {
     ignore_changes = all
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/terraform-1.3.0/internal/terraform/transform_destroy_edge.go 
new/terraform-1.3.1/internal/terraform/transform_destroy_edge.go
--- old/terraform-1.3.0/internal/terraform/transform_destroy_edge.go    
2022-09-21 15:40:32.000000000 +0200
+++ new/terraform-1.3.1/internal/terraform/transform_destroy_edge.go    
2022-09-28 15:46:33.000000000 +0200
@@ -39,6 +39,63 @@
 // still subnets.
 type DestroyEdgeTransformer struct{}
 
+// tryInterProviderDestroyEdge checks if we're inserting a destroy edge
+// across a provider boundary, and only adds the edge if it results in no 
cycles.
+//
+// FIXME: The cycles can arise in valid configurations when a provider depends
+// on resources from another provider. In the future we may want to inspect
+// the dependencies of the providers themselves, to avoid needing to use the
+// blunt hammer of checking for cycles.
+//
+// A reduced example of this dependency problem looks something like:
+/*
+
+createA <-               createB
+  |        \            /    |
+  |         providerB <-     |
+  v                     \    v
+destroyA ------------->  destroyB
+
+*/
+//
+// The edge from destroyA to destroyB would be skipped in this case, but there
+// are still other combinations of changes which could connect the A and B
+// groups around providerB in various ways.
+//
+// The most difficult problem here happens during a full destroy operation.
+// That creates a special case where resources on which a provider depends must
+// exist for evaluation before they are destroyed. This means that any provider
+// dependencies must wait until all that provider's resources have first been
+// destroyed. This is where these cross-provider edges are still required to
+// ensure the correct order.
+func (t *DestroyEdgeTransformer) tryInterProviderDestroyEdge(g *Graph, from, 
to dag.Vertex) {
+       e := dag.BasicEdge(from, to)
+       g.Connect(e)
+
+       pc, ok := from.(GraphNodeProviderConsumer)
+       if !ok {
+               return
+       }
+       fromProvider := pc.Provider()
+
+       pc, ok = to.(GraphNodeProviderConsumer)
+       if !ok {
+               return
+       }
+       toProvider := pc.Provider()
+
+       sameProvider := fromProvider.Equals(toProvider)
+
+       // Check for cycles, and back out the edge if there are any.
+       // The cycles we are looking for only appear between providers, so 
don't
+       // waste time checking for cycles if both nodes use the same provider.
+       if !sameProvider && len(g.Cycles()) > 0 {
+               log.Printf("[DEBUG] DestroyEdgeTransformer: skipping 
inter-provider edge %s->%s which creates a cycle",
+                       dag.VertexName(from), dag.VertexName(to))
+               g.RemoveEdge(e)
+       }
+}
+
 func (t *DestroyEdgeTransformer) Transform(g *Graph) error {
        // Build a map of what is being destroyed (by address string) to
        // the list of destroyers.
@@ -93,7 +150,7 @@
                                for _, desDep := range 
destroyersByResource[resAddr.String()] {
                                        if 
!graphNodesAreResourceInstancesInDifferentInstancesOfSameModule(desDep, des) {
                                                log.Printf("[TRACE] 
DestroyEdgeTransformer: %s has stored dependency of %s\n", 
dag.VertexName(desDep), dag.VertexName(des))
-                                               g.Connect(dag.BasicEdge(desDep, 
des))
+                                               
t.tryInterProviderDestroyEdge(g, desDep, des)
                                        } else {
                                                log.Printf("[TRACE] 
DestroyEdgeTransformer: skipping %s => %s inter-module-instance dependency\n", 
dag.VertexName(desDep), dag.VertexName(des))
                                        }
@@ -105,7 +162,7 @@
                                for _, createDep := range 
creators[resAddr.String()] {
                                        if 
!graphNodesAreResourceInstancesInDifferentInstancesOfSameModule(createDep, des) 
{
                                                log.Printf("[DEBUG] 
DestroyEdgeTransformer: %s has stored dependency of %s\n", 
dag.VertexName(createDep), dag.VertexName(des))
-                                               
g.Connect(dag.BasicEdge(createDep, des))
+                                               
t.tryInterProviderDestroyEdge(g, createDep, des)
                                        } else {
                                                log.Printf("[TRACE] 
DestroyEdgeTransformer: skipping %s => %s inter-module-instance dependency\n", 
dag.VertexName(createDep), dag.VertexName(des))
                                        }
@@ -170,9 +227,18 @@
 // closers also need to disable their use of expansion if the module itself is
 // no longer present.
 type pruneUnusedNodesTransformer struct {
+       // The plan graph builder will skip this transformer except during a 
full
+       // destroy. Planning normally involves all nodes, but during a destroy 
plan
+       // we may need to prune things which are in the configuration but do not
+       // exist in state to evaluate.
+       skip bool
 }
 
 func (t *pruneUnusedNodesTransformer) Transform(g *Graph) error {
+       if t.skip {
+               return nil
+       }
+
        // We need a reverse depth first walk of modules, processing them in 
order
        // from the leaf modules to the root. This allows us to remove unneeded
        // dependencies from child modules, freeing up nodes in the parent 
module
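
tryInterProviderDestroyEdge above follows a connect-then-back-out pattern: add the edge, and remove it again if the graph now contains a cycle. A self-contained sketch of that pattern with a tiny adjacency-map graph and a depth-first cycle check, standing in for the internal dag package:

  package main

  import "fmt"

  type graph map[string][]string

  func (g graph) connect(from, to string) { g[from] = append(g[from], to) }

  func (g graph) removeEdge(from, to string) {
      kept := g[from][:0]
      for _, t := range g[from] {
          if t != to {
              kept = append(kept, t)
          }
      }
      g[from] = kept
  }

  // hasCycle reports whether the graph contains any cycle, via a DFS that
  // looks for back edges.
  func (g graph) hasCycle() bool {
      const (
          unvisited = iota
          visiting
          done
      )
      state := map[string]int{}
      var visit func(n string) bool
      visit = func(n string) bool {
          switch state[n] {
          case visiting:
              return true // back edge found
          case done:
              return false
          }
          state[n] = visiting
          for _, m := range g[n] {
              if visit(m) {
                  return true
              }
          }
          state[n] = done
          return false
      }
      for n := range g {
          if visit(n) {
              return true
          }
      }
      return false
  }

  // tryEdge connects from->to, then backs the edge out if it created a cycle.
  func tryEdge(g graph, from, to string) {
      g.connect(from, to)
      if g.hasCycle() {
          fmt.Printf("skipping edge %s->%s which creates a cycle\n", from, to)
          g.removeEdge(from, to)
      }
  }

  func main() {
      g := graph{}
      g.connect("destroyB", "destroyA")  // existing ordering constraint
      tryEdge(g, "destroyA", "destroyB") // would close a cycle, so it is removed
      fmt.Println(g.hasCycle())          // false
  }
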
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/terraform-1.3.0/internal/terraform/transform_expand.go 
new/terraform-1.3.1/internal/terraform/transform_expand.go
--- old/terraform-1.3.0/internal/terraform/transform_expand.go  2022-09-21 
15:40:32.000000000 +0200
+++ new/terraform-1.3.1/internal/terraform/transform_expand.go  2022-09-28 
15:46:33.000000000 +0200
@@ -5,5 +5,13 @@
 // These nodes are given the eval context and are expected to return
 // a new subgraph.
 type GraphNodeDynamicExpandable interface {
+       // DynamicExpand returns a new graph which will be treated as the 
dynamic
+       // subgraph of the receiving node.
+       //
+       // The second return value is of type error for historical reasons;
+       // it's valid (and most ideal) for DynamicExpand to return the result
+       // of calling ErrWithWarnings on a tfdiags.Diagnostics value instead,
+       // in which case the caller will unwrap it and gather the individual
+       // diagnostics.
        DynamicExpand(EvalContext) (*Graph, error)
 }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/terraform-1.3.0/internal/terraform/transform_root.go 
new/terraform-1.3.1/internal/terraform/transform_root.go
--- old/terraform-1.3.0/internal/terraform/transform_root.go    2022-09-21 
15:40:32.000000000 +0200
+++ new/terraform-1.3.1/internal/terraform/transform_root.go    2022-09-28 
15:46:33.000000000 +0200
@@ -15,11 +15,21 @@
                return nil
        }
 
-       // Add a root
+       // We intentionally add a graphNodeRoot value -- rather than a pointer to
+       // one -- so that all root nodes will coalesce together if two graphs
+       // are merged. Each distinct node value can only be in a graph once,
+       // so adding another graphNodeRoot value to the same graph later will
+       // be a no-op and all of the edges from root nodes will coalesce together
+       // under Graph.Subsume.
+       //
+       // It's important to retain this coalescing guarantee under future
+       // maintenance.
        var root graphNodeRoot
        g.Add(root)
 
-       // Connect the root to all the edges that need it
+       // We initially make the root node depend on every node except itself.
+       // If the caller subsequently runs transitive reduction on the graph then
+       // it's typical for some of these edges to then be removed.
        for _, v := range g.Vertices() {
                if v == root {
                        continue
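
(Aside: the value-versus-pointer point in the comment rests on ordinary Go comparability: two `graphNodeRoot` values are equal, so a set keyed by node values deduplicates them. A tiny demonstration in plain Go, not Terraform's graph code:)

```go
package main

import "fmt"

type graphNodeRoot struct{}

func main() {
	// A set keyed by node values: adding the same graphNodeRoot value twice
	// is a no-op, so merged graphs end up with a single root vertex.
	vertices := map[interface{}]struct{}{}
	vertices[graphNodeRoot{}] = struct{}{}
	vertices[graphNodeRoot{}] = struct{}{}
	fmt.Println(len(vertices)) // 1: the two root values coalesced
}
```
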
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/terraform-1.3.0/version/version.go 
new/terraform-1.3.1/version/version.go
--- old/terraform-1.3.0/version/version.go      2022-09-21 15:40:32.000000000 
+0200
+++ new/terraform-1.3.1/version/version.go      2022-09-28 15:46:33.000000000 
+0200
@@ -11,7 +11,7 @@
 )
 
 // The main version number that is being run at the moment.
-var Version = "1.3.0"
+var Version = "1.3.1"
 
 // A pre-release marker for the version. If this is "" (empty string)
 // then it means that it is a final release. Otherwise, this is a pre-release
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/terraform-1.3.0/website/docs/cli/workspaces/index.mdx 
new/terraform-1.3.1/website/docs/cli/workspaces/index.mdx
--- old/terraform-1.3.0/website/docs/cli/workspaces/index.mdx   2022-09-21 
15:40:32.000000000 +0200
+++ new/terraform-1.3.1/website/docs/cli/workspaces/index.mdx   2022-09-28 
15:46:33.000000000 +0200
@@ -7,72 +7,80 @@
 
 # Managing Workspaces
 
-In Terraform CLI, _workspaces_ are separate instances of
-[state data](/language/state) that can be used from the same working
-directory. You can use workspaces to manage multiple non-overlapping groups of
-resources with the same configuration.
-
-- Every [initialized working directory](/cli/init) has at least
-  one workspace. (If you haven't created other workspaces, it is a workspace
-  named `default`.)
-- For a given working directory, only one workspace can be _selected_ at a 
time.
-- Most Terraform commands (including [provisioning](/cli/run)
-  and [state manipulation](/cli/state) commands) only interact
-  with the currently selected workspace.
-- Use [the `terraform workspace select` 
command](/cli/commands/workspace/select)
-  to change the currently selected workspace.
-- Use the [`terraform workspace list`](/cli/commands/workspace/list),
-  [`terraform workspace new`](/cli/commands/workspace/new), and
-  [`terraform workspace delete`](/cli/commands/workspace/delete) commands
-  to manage the available workspaces in the current working directory.
-
--> **Note:** Terraform Cloud and Terraform CLI both have features called
-"workspaces," but they're slightly different. Terraform Cloud's workspaces
-behave more like completely separate working directories.
-
-## The Purpose of Workspaces
-
-Since most of the resources you can manage with Terraform don't include a 
unique
-name as part of their configuration, it's common to use the same Terraform
-configuration to provision multiple groups of similar resources.
-
-Terraform relies on [state](/language/state) to associate resources with
-real-world objects, so if you run the same configuration multiple times with
-completely separate state data, Terraform can manage many non-overlapping 
groups
-of resources. In some cases you'll want to change
-[variable values](/language/values/variables) for these different
-resource collections (like when specifying differences between staging and
-production deployments), and in other cases you might just want many instances
-of a particular infrastructure pattern.
-
-The simplest way to maintain multiple instances of a configuration with
-completely separate state data is to use multiple
-[working directories](/cli/init) (with different
-[backend](/language/settings/backends/configuration) configurations per 
directory, if you
-aren't using the default `local` backend).
-
-However, this isn't always the most _convenient_ way to handle separate states.
-Terraform installs a separate cache of plugins and modules for each working
-directory, so maintaining multiple directories can waste bandwidth and disk
-space. You must also update your configuration code from version control
-separately for each directory, reinitialize each directory separately when
-changing the configuration, etc.
-
-Workspaces allow you to use the same working copy of your configuration and the
-same plugin and module caches, while still keeping separate states for each
-collection of resources you manage.
+Workspaces in the Terraform CLI refer to separate instances of [state 
data](/language/state) inside the same Terraform working directory. They are 
distinctly different from [workspaces in Terraform 
Cloud](/cloud-docs/workspaces), which each have their own Terraform 
configuration and function as separate working directories.
+
+Terraform relies on state to associate resources with real-world objects. When 
you run the same configuration multiple times with separate state data, 
Terraform can manage multiple sets of non-overlapping resources.
+
+Workspaces can be helpful for specific [use cases](#use-cases), but they are 
not required to use the Terraform CLI. We recommend using [alternative 
approaches](#alternatives-to-workspaces) for complex deployments requiring 
separate credentials and access controls.
+
+
+## Managing CLI Workspaces
+
+Every [initialized working directory](/cli/init) starts with one workspace 
named `default`.
+
+Use the [`terraform workspace list`](/cli/commands/workspace/list), 
[`terraform workspace new`](/cli/commands/workspace/new), and [`terraform 
workspace delete`](/cli/commands/workspace/delete) commands to manage the 
available workspaces in the current working directory.
+
+Use [the `terraform workspace select` command](/cli/commands/workspace/select) 
to change the currently selected workspace. For a given working directory, only 
one workspace can be selected at a time. Most Terraform commands only interact 
with the currently selected workspace. This includes [provisioning](/cli/run) 
and [state manipulation](/cli/state).
+
+When you provision infrastructure in each workspace, you usually need to 
manually specify different [input variables](/language/values/variables) to 
differentiate each collection. For example, you might deploy test 
infrastructure to a different region.
+
+
+## Use Cases
+
+You can create multiple [working directories](/cli/init) to maintain multiple 
instances of a configuration with completely separate state data. However, 
Terraform installs a separate cache of plugins and modules for each working 
directory, so maintaining multiple directories can waste bandwidth and disk 
space. This approach also requires extra tasks like updating configuration from 
version control for each directory separately and reinitializing each directory 
when you change the configuration. Workspaces are convenient because they let 
you create different sets of infrastructure with the same working copy of your 
configuration and the same plugin and module caches.
+
+A common use for multiple workspaces is to create a parallel, distinct copy of
+a set of infrastructure to test a set of changes before modifying production 
infrastructure.
+
+Non-default workspaces are often related to feature branches in version 
control.
+The default workspace might correspond to the `main` or `trunk` branch, which 
describes the intended state of production infrastructure. When a developer 
creates a feature branch for a change, they might also create a corresponding 
workspace and deploy into it a temporary copy of the main infrastructure. They 
can then test changes on the copy without affecting the production 
infrastructure. Once the change is merged and deployed to the default 
workspace, they destroy the test infrastructure and delete the temporary 
workspace.
+
+
+### When Not to Use Multiple Workspaces
+
+Workspaces let you quickly switch between multiple instances of a **single 
configuration** within its **single backend**. They are not designed to solve 
all problems.
+
+When using Terraform to manage larger systems, you should create separate 
Terraform configurations that correspond to architectural boundaries within the 
system. This lets teams manage different components separately. Workspaces 
alone are not a suitable tool for system decomposition because each subsystem 
should have its own separate configuration and backend.
+
+In particular, organizations commonly want to create a strong separation
+between multiple deployments of the same infrastructure serving different
+development stages or different internal teams. In this case, the backend for 
each deployment often has different credentials and access controls. CLI 
workspaces within a working directory use the same backend, so they are not a 
suitable isolation mechanism for this scenario.
+
+## Alternatives to Workspaces
+
+Instead of creating CLI workspaces, you can use one or more [re-usable 
modules](/language/modules/develop) to represent the common elements and then 
represent each instance as a separate configuration that instantiates those 
common elements in the context of a different 
[backend](/language/settings/backends/configuration). The root module of each 
configuration consists only of a backend configuration and a small number of 
`module` blocks with arguments describing any small differences between the 
deployments.
+
+When multiple configurations represent distinct system components rather than 
multiple deployments, you can pass data from one component to another using 
paired resource types and data sources.
+
+- When a shared [Consul](https://www.consul.io/) cluster is available, use 
[`consul_key_prefix`](https://registry.terraform.io/providers/hashicorp/consul/latest/docs/resources/key_prefix)
 to publish to the key/value store and 
[`consul_keys`](https://registry.terraform.io/providers/hashicorp/consul/latest/docs/data-sources/keys)
 to retrieve those values in other configurations.
+
+- In systems that support user-defined labels or tags, use a tagging 
convention to make resources automatically discoverable. For example, use [the 
`aws_vpc` resource 
type](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc)
 to assign suitable tags and then [the `aws_vpc` data 
source](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/vpc)
 to query by those tags in other configurations.
+
+- For server addresses, use a provider-specific resource to create a DNS 
record with a predictable name. Then you can either use that name directly or 
use [the `dns` 
provider](https://registry.terraform.io/providers/hashicorp/dns/latest/docs) to 
retrieve the published addresses in other configurations.
+
+- If you store a Terraform state for one configuration in a remote backend 
that other configurations can access, then the other configurations can use 
[`terraform_remote_state`](/language/state/remote-state-data) to directly 
consume its root module outputs. This setup creates a tighter coupling between 
configurations, and the root configuration does not need to publish its results 
in a separate system.
+
 
 ## Interactions with Terraform Cloud Workspaces
 
 Terraform Cloud organizes infrastructure using workspaces, but its workspaces
-act more like completely separate working directories; each Terraform Cloud
+act more like completely separate working directories. Each Terraform Cloud
 workspace has its own Terraform configuration, set of variable values, state
 data, run history, and settings.
 
-These two kinds of workspaces are different, but related. When [using Terraform
-CLI as a frontend for Terraform Cloud](/cli/cloud), you can associate the 
current working
-directory with one or more remote workspaces. If you associate the
-directory with multiple workspaces (using workspace tags), you can use the
-`terraform workspace` commands to select which remote workspace to use.
+When you [integrate Terraform CLI with Terraform Cloud](/cli/cloud), you can 
associate the current CLI working directory with one or more remote Terraform 
Cloud workspaces. Then, use the `terraform workspace` commands to select the 
remote workspace you want to use for each run.
+
+Refer to [CLI-driven Runs](/cloud-docs/run/cli) in the Terraform Cloud 
documentation for more details.
+
+
+## Workspace Internals
+
+Workspaces are technically equivalent to renaming your state file. Terraform 
wraps this simple notion with a set of protections and support for remote state.
+
+Workspaces are also meant to be a shared resource. They are not private, 
unless you use purely local state and do not commit your state to version 
control.
+
+For local state, Terraform stores the workspace states in a directory called 
`terraform.tfstate.d`. This directory should be treated similarly to local-only 
`terraform.tfstate`. Some teams commit these files to version control, but we 
recommend using a remote backend instead when there are multiple collaborators.
+
+For [remote state](/language/state/remote), the workspaces are stored directly 
in the configured [backend](/language/settings/backends). For example, if you 
use [Consul](/language/settings/backends/consul), the workspaces are stored by 
appending the workspace name to the state path. To ensure that workspace names 
are stored correctly and safely in all backends, the name must be valid to use 
in a URL path segment without escaping.
 
-Refer to [CLI-driven Runs](/cloud-docs/run/cli) in the Terraform Cloud 
documentation for more details about using Terraform CLI with Terraform Cloud.
+Terraform stores the current workspace name locally in the ignored 
`.terraform` directory. This allows multiple team members to work on different 
workspaces concurrently. Workspace names are also attached to associated remote 
workspaces in Terraform Cloud. For more details about workspace names in 
Terraform Cloud, refer to the [CLI Integration 
(recommended)](/cli/cloud/settings#arguments) and [remote 
backend](/language/settings/backends/remote#workspaces) documentation.
\ No newline at end of file
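
(Aside: two details from the rewritten "Workspace Internals" section lend themselves to a quick sketch: local workspace state lives under `terraform.tfstate.d`, and a workspace name must remain valid as a URL path segment without escaping. The per-workspace file layout assumed below is for illustration only and is not taken from the documentation:)

```go
// Rough sketch of the workspace-name and local-layout rules described above.
// The exact per-workspace file layout is an assumption, not a documented API.
package main

import (
	"fmt"
	"net/url"
	"path/filepath"
)

// validWorkspaceName reports whether escaping the name as a URL path segment
// would leave it unchanged, which is the property the documentation requires.
func validWorkspaceName(name string) bool {
	return name != "" && url.PathEscape(name) == name
}

// localStatePath guesses where a local-backend workspace state might live
// (assumed layout: terraform.tfstate.d/<name>/terraform.tfstate).
func localStatePath(name string) string {
	if name == "default" {
		return "terraform.tfstate"
	}
	return filepath.Join("terraform.tfstate.d", name, "terraform.tfstate")
}

func main() {
	for _, name := range []string{"default", "staging", "feature/login"} {
		fmt.Printf("%-15s valid=%-5v path=%s\n", name, validWorkspaceName(name), localStatePath(name))
	}
}
```
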
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/terraform-1.3.0/website/docs/language/functions/startswith.mdx 
new/terraform-1.3.1/website/docs/language/functions/startswith.mdx
--- old/terraform-1.3.0/website/docs/language/functions/startswith.mdx  
2022-09-21 15:40:32.000000000 +0200
+++ new/terraform-1.3.1/website/docs/language/functions/startswith.mdx  
2022-09-28 15:46:33.000000000 +0200
@@ -1,5 +1,5 @@
 ---
-page_title: startsswith - Functions - Configuration Language
+page_title: startswith - Functions - Configuration Language
 description: |-
   The startswith function  takes two values: a string to check and a prefix 
string. It returns true if the string begins with that exact prefix.
 ---
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/terraform-1.3.0/website/docs/language/state/workspaces.mdx 
new/terraform-1.3.1/website/docs/language/state/workspaces.mdx
--- old/terraform-1.3.0/website/docs/language/state/workspaces.mdx      
2022-09-21 15:40:32.000000000 +0200
+++ new/terraform-1.3.1/website/docs/language/state/workspaces.mdx      
2022-09-28 15:46:33.000000000 +0200
@@ -7,71 +7,39 @@
 
 # Workspaces
 
-Each Terraform configuration has an associated 
[backend](/language/settings/backends)
-that defines how operations are executed and where persistent data such as
-[the Terraform state](/language/state/purpose) are
-stored.
-
-The persistent data stored in the backend belongs to a _workspace_. Initially
-the backend has only one workspace, called "default", and thus there is only
-one Terraform state associated with that configuration.
-
-Certain backends support _multiple_ named workspaces, allowing multiple states
-to be associated with a single configuration. The configuration still
-has only one backend, but multiple distinct instances of that configuration
-to be deployed without configuring a new backend or changing authentication
+Each Terraform configuration has an associated 
[backend](/language/settings/backends) that defines how Terraform executes 
operations and where Terraform stores persistent data, like 
[state](/language/state/purpose).
+
+The persistent data stored in the backend belongs to a workspace. The backend 
initially has only one workspace containing one Terraform state associated with 
that configuration. Some backends support multiple named workspaces, allowing 
multiple states to be associated with a single configuration. The configuration 
still has only one backend, but you can deploy multiple distinct instances of 
that configuration without configuring a new backend or changing authentication
 credentials.
 
-Multiple workspaces are currently supported by the following backends:
+-> **Note**: The Terraform CLI workspaces are different from [workspaces in 
Terraform Cloud](/cloud-docs/workspaces). Refer to [Initializing and 
Migrating](/cli/cloud/migrating) for details about migrating a configuration 
with multiple workspaces to Terraform Cloud.
+
+## Backends Supporting Multiple Workspaces
+
+You can use multiple workspaces with the following backends:
+
+- [AzureRM](/language/settings/backends/azurerm)
+- [Consul](/language/settings/backends/consul)
+- [COS](/language/settings/backends/cos)
+- [GCS](/language/settings/backends/gcs)
+- [Kubernetes](/language/settings/backends/kubernetes)
+- [Local](/language/settings/backends/local)
+- [OSS](/language/settings/backends/oss)
+- [Postgres](/language/settings/backends/pg)
+- [Remote](/language/settings/backends/remote)
+- [S3](/language/settings/backends/s3)
 
-* [AzureRM](/language/settings/backends/azurerm)
-* [Consul](/language/settings/backends/consul)
-* [COS](/language/settings/backends/cos)
-* [GCS](/language/settings/backends/gcs)
-* [Kubernetes](/language/settings/backends/kubernetes)
-* [Local](/language/settings/backends/local)
-* [OSS](/language/settings/backends/oss)
-* [Postgres](/language/settings/backends/pg)
-* [Remote](/language/settings/backends/remote)
-* [S3](/language/settings/backends/s3)
-
-In the 0.9 line of Terraform releases, this concept was known as "environment".
-It was renamed in 0.10 based on feedback about confusion caused by the
-overloading of the word "environment" both within Terraform itself and within
-organizations that use Terraform.
-
--> **Note**: The Terraform CLI workspace concept described in this document is
-different from but related to the Terraform Cloud
-[workspace](/cloud-docs/workspaces) concept.
-If you use multiple Terraform CLI workspaces in a single Terraform 
configuration
-and are migrating that configuration to Terraform Cloud, refer to 
[Initializing and Migrating](/cli/cloud/migrating).
 
 ## Using Workspaces
 
-Terraform starts with a single workspace named "default". This
-workspace is special both because it is the default and also because
-it cannot ever be deleted. If you've never explicitly used workspaces, then
-you've only ever worked on the "default" workspace.
-
-Workspaces are managed with the `terraform workspace` set of commands. To
-create a new workspace and switch to it, you can use `terraform workspace new`;
-to switch workspaces you can use `terraform workspace select`; etc.
-
-For example, creating a new workspace:
-
-```text
-$ terraform workspace new bar
-Created and switched to workspace "bar"!
-
-You're now on a new, empty workspace. Workspaces isolate their state,
-so if you run "terraform plan" Terraform will not see any existing state
-for this configuration.
-```
+~> **Important:** Workspaces are not appropriate for system decomposition or 
deployments requiring separate credentials and access controls. Refer to [Use 
Cases](/cli/workspaces#use-cases) in the Terraform CLI documentation for 
details and recommended alternatives.
+
+Terraform starts with a single, default workspace named `default` that you 
cannot delete. If you have not created a new workspace, you are using the 
default workspace in your Terraform working directory.
+
+When you run `terraform plan` in a new workspace, Terraform does not access 
existing resources in other workspaces. These resources still physically exist, 
but you must switch workspaces to manage them.
+
+Refer to the [Terraform CLI workspaces](/cli/workspaces) documentation for 
full details about how to create and use workspaces.
 
-As the command says, if you run `terraform plan`, Terraform will not see
-any existing resources that existed on the default (or any other) workspace.
-**These resources still physically exist,** but are managed in another
-Terraform workspace.
 
 ## Current Workspace Interpolation
 
@@ -103,103 +71,3 @@
   # ... other arguments
 }
 ```
-
-## When to use Multiple Workspaces
-
-Named workspaces allow conveniently switching between multiple instances of
-a _single_ configuration within its _single_ backend. They are convenient in
-a number of situations, but cannot solve all problems.
-
-A common use for multiple workspaces is to create a parallel, distinct copy of
-a set of infrastructure in order to test a set of changes before modifying the
-main production infrastructure. For example, a developer working on a complex
-set of infrastructure changes might create a new temporary workspace in order
-to freely experiment with changes without affecting the default workspace.
-
-Non-default workspaces are often related to feature branches in version 
control.
-The default workspace might correspond to the "main" or "trunk" branch,
-which describes the intended state of production infrastructure. When a
-feature branch is created to develop a change, the developer of that feature
-might create a corresponding workspace and deploy into it a temporary "copy"
-of the main infrastructure so that changes can be tested without affecting
-the production infrastructure. Once the change is merged and deployed to the
-default workspace, the test infrastructure can be destroyed and the temporary
-workspace deleted.
-
-When Terraform is used to manage larger systems, teams should use multiple
-separate Terraform configurations that correspond with suitable architectural
-boundaries within the system so that different components can be managed
-separately and, if appropriate, by distinct teams. Workspaces _alone_
-are not a suitable tool for system decomposition, because each subsystem should
-have its own separate configuration and backend, and will thus have its own
-distinct set of workspaces.
-
-In particular, organizations commonly want to create a strong separation
-between multiple deployments of the same infrastructure serving different
-development stages (e.g. staging vs. production) or different internal teams.
-In this case, the backend used for each deployment often belongs to that
-deployment, with different credentials and access controls. Named workspaces
-are _not_ a suitable isolation mechanism for this scenario.
-
-Instead, use one or more [re-usable modules](/language/modules/develop) to
-represent the common elements, and then represent each instance as a separate
-configuration that instantiates those common elements in the context of a
-different backend. In that case, the root module of each configuration will
-consist only of a backend configuration and a small number of `module` blocks
-whose arguments describe any small differences between the deployments.
-
-Where multiple configurations are representing distinct system components
-rather than multiple deployments, data can be passed from one component to
-another using paired resources types and data sources. For example:
-
-* Where a shared [Consul](https://www.consul.io/) cluster is available, use
-  
[`consul_key_prefix`](https://registry.terraform.io/providers/hashicorp/consul/latest/docs/resources/key_prefix)
 to
-  publish to the key/value store and 
[`consul_keys`](https://registry.terraform.io/providers/hashicorp/consul/latest/docs/data-sources/keys)
-  to retrieve those values in other configurations.
-
-* In systems that support user-defined labels or tags, use a tagging convention
-  to make resources automatically discoverable. For example, use
-  [the `aws_vpc` resource 
type](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc)
-  to assign suitable tags and then
-  [the `aws_vpc` data 
source](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/vpc)
-  to query by those tags in other configurations.
-
-* For server addresses, use a provider-specific resource to create a DNS
-  record with a predictable name and then either use that name directly or
-  use [the `dns` 
provider](https://registry.terraform.io/providers/hashicorp/dns/latest/docs) to 
retrieve
-  the published addresses in other configurations.
-
-* If a Terraform state for one configuration is stored in a remote backend
-  that is accessible to other configurations then
-  [`terraform_remote_state`](/language/state/remote-state-data)
-  can be used to directly consume its root module outputs from those other
-  configurations. This creates a tighter coupling between configurations,
-  but avoids the need for the "producer" configuration to explicitly
-  publish its results in a separate system.
-
-## Workspace Internals
-
-Workspaces are technically equivalent to renaming your state file. They
-aren't any more complex than that. Terraform wraps this simple notion with
-a set of protections and support for remote state.
-
-For local state, Terraform stores the workspace states in a directory called
-`terraform.tfstate.d`. This directory should be treated similarly to
-local-only `terraform.tfstate`; some teams commit these files to version
-control, although using a remote backend instead is recommended when there are
-multiple collaborators.
-
-For [remote state](/language/state/remote), the workspaces are stored
-directly in the configured [backend](/language/settings/backends). For 
example, if you
-use [Consul](/language/settings/backends/consul), the workspaces are stored
-by appending the workspace name to the state path. To ensure that
-workspace names are stored correctly and safely in all backends, the name
-must be valid to use in a URL path segment without escaping.
-
-The important thing about workspace internals is that workspaces are
-meant to be a shared resource. They aren't a private, local-only notion
-(unless you're using purely local state and not committing it).
-
-The "current workspace" name is stored locally in the ignored
-`.terraform` directory. This allows multiple team members to work on
-different workspaces concurrently. Workspace names are also attached to 
associated remote workspaces in Terraform Cloud. For more details about 
workspace names in Terraform Cloud, refer to the [remote 
backend](/language/settings/backends/remote#workspaces) and [CLI Integration 
(recommended)](/cli/cloud/settings#arguments) documentation.

++++++ terraform.obsinfo ++++++
--- /var/tmp/diff_new_pack.6a1BDT/_old  2022-09-30 17:58:52.521387248 +0200
+++ /var/tmp/diff_new_pack.6a1BDT/_new  2022-09-30 17:58:52.525387257 +0200
@@ -1,5 +1,5 @@
 name: terraform
-version: 1.3.0
-mtime: 1663767632
-commit: 5c239ecd6ac3a183bb4852940f2f2d1af1a766ce
+version: 1.3.1
+mtime: 1664372793
+commit: ebc1f295cafb53341f9b58c3c16099f98d3b58a6
 

++++++ vendor.tar.gz ++++++
/work/SRC/openSUSE:Factory/terraform/vendor.tar.gz 
/work/SRC/openSUSE:Factory/.terraform.new.2275/vendor.tar.gz differ: char 5, 
line 1
