In most applications it's possible to fully evaluate configuration at the
beginning and work only with resolved values after that, but in some
unusual cases it's necessary to split parsing and decoding between two
separate processes connected by a pipe or network connection.
hclpack is intended to provide compact wire formats for sending bodies
over the network such that they can be decoded and evaluated on the
other side with the same results. This is not something that can happen
fully automatically because an hcl.Body is an abstract node rather than
a physical construct, so access to the original source code is required
both to construct such a representation and to interpret any source
ranges that emerge from the final evaluation.
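As a rough sketch of the round trip this enables, assuming hclpack's
PackNativeFile function and JSON marshalling of the packed body (the
import paths shown are the modern ones, not necessarily those in use at
the time):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"

        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/gohcl"
        "github.com/hashicorp/hcl/v2/hclpack"
    )

    func main() {
        src := []byte(`name = "example"`)

        // Sender: parse the native syntax and pack it into a compact,
        // serializable representation.
        body, diags := hclpack.PackNativeFile(src, "example.hcl", hcl.Pos{Line: 1, Column: 1})
        if diags.HasErrors() {
            log.Fatal(diags)
        }
        buf, err := json.Marshal(body) // send buf over the wire
        if err != nil {
            log.Fatal(err)
        }

        // Receiver: unmarshal and decode as a normal hcl.Body.
        var got hclpack.Body
        if err := json.Unmarshal(buf, &got); err != nil {
            log.Fatal(err)
        }
        var config struct {
            Name string `hcl:"name"`
        }
        if diags := gohcl.DecodeBody(&got, nil, &config); diags.HasErrors() {
            log.Fatal(diags)
        }
        fmt.Println(config.Name) // "example"
    }
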
To reduce confusion with object attributes, we switched a while back to
using the term "argument" to refer to the key/value pairs in a body, but
these diagnostic messages were not updated.
These are wrappers around the lower-level hclwrite package that are able
to reverse a subset of the behavior of the Decode functions to populate
an hclwrite DOM.
They are not fully symmetrical with DecodeBody because that function can
leave certain parts of the configuration in their opaque form for later
decoding, and these encode functions don't have enough information to
repack that abstract/opaque form into new source code.
In practice we expect that callers using complex techniques like partial
decoding will also use more complex techniques with the hclwrite API
directly, since they will need to coordinate partial _encoding_ of data
that has been portioned off into separate structures, which gohcl is not
equipped to do itself.
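For the simple case these functions are meant to cover, the flow looks
roughly like this (a sketch using the EncodeAsBlock and AppendBlock
names as they appear in gohcl and hclwrite today):

    package main

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2/gohcl"
        "github.com/hashicorp/hcl/v2/hclwrite"
    )

    type Service struct {
        Name string `hcl:"name"`
        Port int    `hcl:"port"`
    }

    func main() {
        f := hclwrite.NewEmptyFile()

        // Encode a plain Go value as a "service" block in the file's
        // root body. This covers only fully-decoded values, not any
        // opaque partially-decoded remnants.
        block := gohcl.EncodeAsBlock(&Service{Name: "web", Port: 8080}, "service")
        f.Body().AppendBlock(block)

        fmt.Printf("%s", f.Bytes())
        // service {
        //   name = "web"
        //   port = 8080
        // }
    }
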
This completes the minimal functionality for creating a new file from
scratch, rather than modifying an existing one. This is illustrated by
a new test TestRoundupCreate that uses the API to create a new file in a
similar way to how a calling application might.
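In caller terms the minimal from-scratch flow now looks roughly like
this (a sketch against the current hclwrite API, not the test itself):

    package main

    import (
        "os"

        "github.com/hashicorp/hcl/v2/hclwrite"
        "github.com/zclconf/go-cty/cty"
    )

    func main() {
        // Start from an empty in-memory file rather than parsed source.
        f := hclwrite.NewEmptyFile()
        f.Body().SetAttributeValue("io_mode", cty.StringVal("async"))
        f.WriteTo(os.Stdout) // io_mode = "async"
    }
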
There isn't any strong reason for this -- they don't implement io.Reader
and so can't be used in places where a Reader+WriterTo pairing is
expected, like io.Copy -- but golint assumes that anything called
WriteTo with an io.Writer argument is an attempt to implement
io.WriterTo, and so this just shuts up the linter.
Since this function implicitly creates a new body, this name is more
appropriate and leaves the name "AppendBlock" open for a later method to
append an _existing_ block, such as when moving a block from one file
to another.
Invalid blocks now have empty bodies rather than nil bodies, to avoid
the need for callers to specially handle nils here when they are doing
analysis of a partially-invalid file.
This method allows a caller to generate a nested block within a body.
Since a nested block has its own content body, this now allows for deep
structures to be generated.
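A sketch of how such deep structures can be generated (method names as
in the current hclwrite API):

    package main

    import (
        "os"

        "github.com/hashicorp/hcl/v2/hclwrite"
        "github.com/zclconf/go-cty/cty"
    )

    func main() {
        f := hclwrite.NewEmptyFile()

        // AppendNewBlock creates the block along with its own new,
        // empty content body.
        svc := f.Body().AppendNewBlock("service", []string{"web"})
        svc.Body().SetAttributeValue("port", cty.NumberIntVal(8080))

        // Each nested body can itself contain blocks, allowing
        // arbitrarily deep trees.
        hc := svc.Body().AppendNewBlock("health_check", nil)
        hc.Body().SetAttributeValue("path", cty.StringVal("/healthz"))

        f.WriteTo(os.Stdout)
    }
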
Our usual rule for parse errors is to return a valid-but-incomplete object
along with error diagnostics. This was violating that rule by returning
a nil child body, which callers do not expect to deal with.
Instead, we'll return an *empty* body, so that callers who use the partial
result for careful analyses can still process the block header, without
needing to guard against the body being nil.
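From the caller's side, the contract looks roughly like this (the exact
recovery for any given malformed input is illustrative, not guaranteed):

    package main

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/hclsyntax"
    )

    func main() {
        // The body of this block is malformed, but its header is fine.
        src := []byte("service \"web\" {\n  port =\n}\n")

        f, diags := hclsyntax.ParseConfig(src, "broken.hcl", hcl.InitialPos)
        if diags.HasErrors() {
            fmt.Println(diags) // report, but keep going
        }

        // The partial result is still safe to analyze: child bodies
        // are empty rather than nil, so no nil guard is needed here.
        body := f.Body.(*hclsyntax.Body)
        for _, block := range body.Blocks {
            fmt.Println(block.Type, block.Labels)
        }
    }
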
This was always a bit of an outlier here because the rest of the API is
intentionally designed to encourage treating AST nodes as immutable.
Although the transforming walkers were functionally correct, they were
causing false positives in the race detector if two walks run concurrently.
We may later introduce something similar to this in the hclwrite package,
where the AST nodes are explicitly mutable.
There is precedent for allowing strings containing digits where numbers are expected,
and so this extends that to also allow for boolean values to be given as strings.
This applies only to callers going through the decoder API. Direct access via the AST
will reflect exactly what was given in the input configuration.
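A sketch of the intended caller-visible behavior, shown here through
gohcl as one decoder entry point where such conversions apply (an
assumption; the commit doesn't name the entry point):

    package main

    import (
        "fmt"
        "log"

        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/gohcl"
        "github.com/hashicorp/hcl/v2/hclsyntax"
    )

    func main() {
        // The value is written as a string but decodes into a bool field.
        src := []byte(`enabled = "true"`)

        f, diags := hclsyntax.ParseConfig(src, "c.hcl", hcl.InitialPos)
        if diags.HasErrors() {
            log.Fatal(diags)
        }

        var cfg struct {
            Enabled bool `hcl:"enabled"`
        }
        if diags := gohcl.DecodeBody(f.Body, nil, &cfg); diags.HasErrors() {
            log.Fatal(diags)
        }
        fmt.Println(cfg.Enabled) // true; the AST still holds the string literal
    }
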
"HCL Configuration Language" will feel redundant to anyone who thinks
they know what "HCL" is supposed to stand for, even though our docs don't
actually expand the abbreviation at all. This new version is also still
redundant with that interpretation, but at least it emphasizes that it's
a toolkit for creating configuration languages rather than a configuration
language in its own right.
Git isn't available in the Read the Docs build, but we don't actually
need it there anyway because we are told the version number that Read
the Docs thinks it is building.
Although our underlying parse tree retains all of the token content, it
doesn't necessarily retain all of the spacing information under editing,
and so formatting on save ensures that we'll produce a canonical result
even if some edits have been applied that have changed the expected
alignment of objects, etc.
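The standalone formatter applies the same canonical layout; as a small
illustration using hclwrite.Format:

    package main

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2/hclwrite"
    )

    func main() {
        // Misaligned input is re-aligned to the canonical layout.
        src := []byte("a =  1\nlong_name =2\n")
        fmt.Printf("%s", hclwrite.Format(src))
        // Expected output, roughly:
        // a         = 1
        // long_name = 2
    }
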
When nested attributes are of type cty.DynamicPseudoType, a block spec
that is backed by a cty collection is annoying to use because it requires
all of the blocks to have homogeneous types for such attributes.
These new specs are similar to BlockListSpec and BlockMapSpec
respectively, but permit each nested block result to have its own distinct
type.
In return for this flexibility, we lose the ability to predict the exact
type of the result: these specs must just indicate their type as being
cty.DynamicPseudoType themselves, since we need to know how many blocks
there are and what types are inside them before we can know our final
result type.
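Assuming these specs landed as hcldec.BlockTupleSpec and
hcldec.BlockObjectSpec (the names used in hcldec today; the commit
doesn't name them), the tuple variant looks roughly like this:

    package main

    import (
        "fmt"
        "log"

        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/hcldec"
        "github.com/hashicorp/hcl/v2/hclsyntax"
        "github.com/zclconf/go-cty/cty"
    )

    func main() {
        // Each "item" block has a differently-typed "value" attribute.
        src := []byte("item { value = \"a\" }\nitem { value = 2 }\n")

        f, diags := hclsyntax.ParseConfig(src, "items.hcl", hcl.InitialPos)
        if diags.HasErrors() {
            log.Fatal(diags)
        }

        spec := &hcldec.BlockTupleSpec{
            TypeName: "item",
            Nested:   &hcldec.AttrSpec{Name: "value", Type: cty.DynamicPseudoType},
        }
        val, diags := hcldec.Decode(f.Body, spec, nil)
        if diags.HasErrors() {
            log.Fatal(diags)
        }
        fmt.Println(val.Type().FriendlyName()) // a tuple, not a list
    }
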
Our BlockList, BlockSet, and BlockMap specs all produce cty collection
values, which require all elements to have a homogeneous type. If the
nested spec contained an attribute of type cty.DynamicPseudoType, that
would create the risk of each element having a different type, which would
previously have caused decoding to panic.
Now we either handle this during decode (BlockList, BlockSet) or forbid
it outright (BlockMap) to prevent that crash. BlockMap could _potentially_
also handle this during decode, but that would require a more significant
reorganization of its implementation than I want to take on right now,
and decoding dynamically-typed values inside collections is an edge case
anyway.
However, since this repository is only a temporary home for this module,
the go.mod file carries a warning to that effect. It will move to its more
permanent home as it transitions from experimental to released.
Although the spec test suite and its associated harness are designed to
be usable by implementations of HCL not written in Go, it's convenient
to run it as part of our own "go test" test suite here so there isn't
an additional thing to run on each change.
To achieve this, the new package hcl/spectests will build both hcldec and
hclspecsuite from the latest source and then run the latter to execute
the test suite, capturing the output and converting it (sloppily) into
testing.T method calls to produce something vaguely reasonable.
Other than the small amount of "parsing" to make it look in the output
like a normal Go test, there's nothing special going on here and so it's
still valid to run the spec suite manually with a build of hcldec from
this codebase, which should produce the same result.
When a test file declares one or more expected diagnostics, we check those
instead of checking the result value. The severities and source ranges
must match.
We don't test the error messages themselves because they are not part of
the specification and may vary between implementations or, in future, be
translated into other languages.