The try(...) and can(...) functions are intended to make it more
convenient to work with deep data structures of unknown shape, by allowing
a caller to concisely try a complex traversal operation against a value
without having to guard against each possible failure mode individually.
These rely on the customdecode extension to get access to their argument
expressions directly, rather than only the results of evaluating those
expressions. The expressions can then be evaluated in a controlled manner
so that any resulting errors can be recognized and suppressed as
appropriate.
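For illustration, wiring these into a calling application's evaluation
context might look roughly like the following sketch, assuming the
functions are exported as TryFunc and CanFunc from the ext/tryfunc
package (the variables and the example expression are invented):

    package main

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/ext/tryfunc"
        "github.com/hashicorp/hcl/v2/hclsyntax"
        "github.com/zclconf/go-cty/cty"
        "github.com/zclconf/go-cty/cty/function"
    )

    func main() {
        ctx := &hcl.EvalContext{
            Variables: map[string]cty.Value{
                "a": cty.ObjectVal(map[string]cty.Value{
                    "b": cty.StringVal("ok"),
                }),
            },
            Functions: map[string]function.Function{
                "try": tryfunc.TryFunc,
                "can": tryfunc.CanFunc,
            },
        }

        expr, diags := hclsyntax.ParseExpression(
            []byte(`try(a.b.c, a.b, "fallback")`),
            "example.hcl", hcl.Pos{Line: 1, Column: 1},
        )
        if diags.HasErrors() {
            panic(diags)
        }

        v, diags := expr.Value(ctx)
        if diags.HasErrors() {
            panic(diags)
        }
        // a.b.c fails (strings have no attributes), so the next
        // argument that succeeds, a.b, wins.
        fmt.Println(v.AsString()) // "ok"
    }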
Most of the time, the standard expression decoding built in to HCL is
sufficient. Sometimes though, it's useful to be able to customize the
decoding of certain arguments where the application intends to use them
in a very specific way, such as in static analysis.
This extension is an approximate analog of gohcl's support for decoding
into an hcl.Expression, allowing hcldec-based applications and
applications with custom functions to similarly capture and manipulate
the physical expressions used in arguments, rather than their values.
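For comparison, the existing gohcl behavior that this is an analog of
looks roughly like this (a minimal sketch; the attribute and field names
are invented):

    package main

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/gohcl"
        "github.com/hashicorp/hcl/v2/hclsyntax"
    )

    type Config struct {
        // Because the field type is hcl.Expression, gohcl stores the raw
        // expression here rather than evaluating it during decoding.
        Condition hcl.Expression `hcl:"condition"`
    }

    func main() {
        src := []byte("condition = var.enabled && length(var.items) > 0\n")
        f, diags := hclsyntax.ParseConfig(src, "example.hcl", hcl.Pos{Line: 1, Column: 1})
        if diags.HasErrors() {
            panic(diags)
        }

        var cfg Config
        if diags := gohcl.DecodeBody(f.Body, nil, &cfg); diags.HasErrors() {
            panic(diags)
        }

        // The application can now analyze the expression itself, for
        // example to see which variables it refers to.
        fmt.Println(len(cfg.Condition.Variables()), "traversals referenced")
    }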
This change also includes one example use-case: the typeexpr extension
now includes a cty function called ConvertFunc that takes a type
expression as its
second argument. A type expression is not evaluatable in the usual sense,
but thanks to cty capsule types we _can_ produce a cty.Value from one
and then make use of it inside the function implementation, without
exposing this custom type to the broader language:
convert(["foo"], set(string))
This mechanism is intentionally restricted only to "argument-like"
locations where there is a specific type we are attempting to decode into.
For now, that's hcldec AttrSpec/BlockAttrsSpec -- analogous to gohcl
decoding into hcl.Expression -- and in arguments to functions.
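As a rough sketch of the function-argument case, an application exposing
ConvertFunc under the name "convert" could evaluate the example above
along these lines (the surrounding wiring is illustrative, not part of
this change):

    package main

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/ext/typeexpr"
        "github.com/hashicorp/hcl/v2/hclsyntax"
        "github.com/zclconf/go-cty/cty/function"
    )

    func main() {
        ctx := &hcl.EvalContext{
            Functions: map[string]function.Function{
                "convert": typeexpr.ConvertFunc,
            },
        }

        expr, diags := hclsyntax.ParseExpression(
            []byte(`convert(["foo"], set(string))`),
            "example.hcl", hcl.Pos{Line: 1, Column: 1},
        )
        if diags.HasErrors() {
            panic(diags)
        }

        // The second argument is decoded as a type expression rather than
        // evaluated, so the tuple value is converted to a set of strings.
        v, diags := expr.Value(ctx)
        if diags.HasErrors() {
            panic(diags)
        }
        fmt.Println(v.Type().FriendlyName()) // "set of string"
    }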
This includes a new feature to allow extension properties associated with
capsule types, which HCL can in turn use to allow certain capsule types
to opt in to special handling at the HCL layer.
(There are no such hooks in use as of this commit, however.)
When we did the automatic rewriting from the hashicorp/hcl2 import paths
to hashicorp/hcl/v2 import paths, our automatic rewrite script did not
run "go fmt" afterwards and so left some imports misordered as a result
of the slightly-differently-shaped directory structure in the new
repository.
The two cases where we decode attribute values should include the
expression and EvalContext in any diagnostics they generate so that the
calling application can give hints about the types and values of variables
that are used within the expression.
This also includes some adjustments to the returned source ranges so that
both cases are consistent with one another and so that both indicate
the entire expression as the Subject and include the attribute name in
the Context. Including the whole expression in the range ensures that
when there's a problem inside a tuple or object constructor, for example,
we'll show the line containing the problem and not just the opening [
or { symbol.
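To illustrate why this matters (this is not code from the change itself;
the expression and variable are invented), a calling application that
receives diagnostics carrying these fields can report the offending
values, for example:

    package main

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/hclsyntax"
        "github.com/zclconf/go-cty/cty"
    )

    func main() {
        ctx := &hcl.EvalContext{
            Variables: map[string]cty.Value{
                "port": cty.StringVal("not-a-number"),
            },
        }

        expr, diags := hclsyntax.ParseExpression(
            []byte(`[port + 1]`), "example.hcl", hcl.Pos{Line: 1, Column: 1},
        )
        if diags.HasErrors() {
            panic(diags)
        }

        if _, moreDiags := expr.Value(ctx); moreDiags.HasErrors() {
            for _, diag := range moreDiags {
                // Because Expression and EvalContext are populated, the
                // caller can explain which variable values were involved.
                if diag.Expression != nil && diag.EvalContext != nil {
                    for _, tr := range diag.Expression.Variables() {
                        name := tr.RootName()
                        fmt.Printf("%s: %s = %#v\n", diag.Summary, name,
                            diag.EvalContext.Variables[name])
                    }
                }
            }
        }
    }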
Some HCL callers make the (reasonable) assumption that the overall source
range of an expression will be a superset of all of the ranges of its
child expressions, for purposes such as extraction of source code
snippets, parse tree annotation in hclwrite, text editor analysis
functions like "go to reference", etc.
The IndexExpr type was not previously honoring that assumption, since its
source range was placed around only the bracket portion. That is a good
region to use when reporting errors relating to the index operation, but
it is not a faithful representation of the full extent of the expression.
In order to meet both of these requirements at once, IndexExpr now has
both SrcRange covering the entire expression and BracketRange covering
the index part delimited by brackets. We can then use BracketRange in
our error messages but return SrcRange as the result of the general
Range method that is common to all expression types.
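A rough sketch of how a caller might use the two ranges (the surrounding
code is illustrative):

    package main

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/hclsyntax"
    )

    func main() {
        expr, diags := hclsyntax.ParseExpression(
            []byte(`foo.bar[count.index]`),
            "example.hcl", hcl.Pos{Line: 1, Column: 1},
        )
        if diags.HasErrors() {
            panic(diags)
        }

        if ie, ok := expr.(*hclsyntax.IndexExpr); ok {
            // Range() (backed by SrcRange) spans the whole expression, so
            // snippet extraction and editor tooling see a superset of the
            // child expression ranges...
            fmt.Println("whole expression:", ie.Range())
            // ...while BracketRange still isolates the [ ... ] part for
            // error messages about the index operation itself.
            fmt.Println("index brackets:  ", ie.BracketRange)
        }
    }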
This patch release includes a couple small bug fixes that don't affect HCL
directly, and a slight improvement to error messages for failed
conversions from boolean to string.
The primary goal of this upgrade is to ratchet dependents on HCL up to the
newer version of cty in order to get those improved error messages, but
during upgrading they may also benefit from the other bug fixes.
We currently have functions for constructing new expressions from either
constant values or from traversals, but some surgical updates require
producing a more complex expression.
In the long run perhaps we'll have some mechanism for constructing valid
expressions via a high-level AST-like API, similar to what we already have
for structural constructs, but as a simpler first step here we add a
mechanism to just write raw tokens directly into an expression, with the
caller being responsible for making sure those tokens represent valid
HCL expression syntax.
Since this new API treats the given tokens as unstructured, the resulting
expression can't fully support the whole of the expression API, but it's
good enough for writing in complex expressions without disturbing existing
content elsewhere in the input file.
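A minimal sketch of what this looks like in use, assuming the mechanism is
exposed as Body.SetAttributeRaw on hclwrite bodies (the attribute name and
identifiers are invented):

    package main

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2/hclsyntax"
        "github.com/hashicorp/hcl/v2/hclwrite"
    )

    func main() {
        f := hclwrite.NewEmptyFile()

        // The caller is responsible for ensuring these tokens form a valid
        // expression; hclwrite treats them as opaque.
        tokens := hclwrite.Tokens{
            {Type: hclsyntax.TokenIdent, Bytes: []byte("min")},
            {Type: hclsyntax.TokenOParen, Bytes: []byte("(")},
            {Type: hclsyntax.TokenIdent, Bytes: []byte("a")},
            {Type: hclsyntax.TokenComma, Bytes: []byte(",")},
            {Type: hclsyntax.TokenIdent, Bytes: []byte("b"), SpacesBefore: 1},
            {Type: hclsyntax.TokenCParen, Bytes: []byte(")")},
        }
        f.Body().SetAttributeRaw("smallest", tokens)

        fmt.Printf("%s", f.Bytes()) // smallest = min(a, b)
    }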
Currently, if a nonzero struct is passed to the DecodeBody function, the
decoding process keeps any already-initialized top-level field values, or
overwrites them if they are specified in HCL. This behavior is useful
because it allows callers to set default values for top-level fields.
However, if the field is a nested block (a struct) or a slice of blocks,
the entire value is overwritten, which erases the existing values. Because
of that, setting default values in nested structs is not possible.
With this commit, the decode functions check whether the value is nil and
only then set it to an empty struct, which allows decoding to merge into
existing struct values.
In the case of a slice, either a new empty element is appended or an
existing element is reused as the target for the new value, so the values
are merged.
Also, to keep the same behavior as json.Unmarshal, if the retained list
has more elements than the new list, the extra elements are removed and
the remaining elements are merged. This allows default values for
positional elements as well.
The behavior added by this patch is the same as in json.Unmarshal and
yaml.Unmarshal, which both retain nested structs during unmarshaling, so
I believe this is the behavior users would expect.
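A rough sketch of the intended effect (the struct and attribute names are
invented, and the pointer field is just one way to make the attribute
optional in gohcl):

    package main

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/gohcl"
        "github.com/hashicorp/hcl/v2/hclsyntax"
    )

    type Service struct {
        Name string `hcl:"name"`
        Port *int   `hcl:"port"` // pointer type, so the attribute is optional
    }

    type Config struct {
        Service Service `hcl:"service,block"`
    }

    func main() {
        src := []byte("service {\n  name = \"web\"\n}\n")
        f, diags := hclsyntax.ParseConfig(src, "example.hcl", hcl.Pos{Line: 1, Column: 1})
        if diags.HasErrors() {
            panic(diags)
        }

        // Pre-populate a default inside the nested block. With this change
        // the block is merged into the existing struct rather than
        // replacing it, so the default survives when the input doesn't
        // set "port".
        defaultPort := 8080
        cfg := Config{Service: Service{Port: &defaultPort}}

        if diags := gohcl.DecodeBody(f.Body, nil, &cfg); diags.HasErrors() {
            panic(diags)
        }
        fmt.Println(cfg.Service.Name, *cfg.Service.Port) // web 8080
    }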
In the long run we intend to transition all of the testing steps here to
CircleCI, but we need to keep AppVeyor enabled until we have both the
HCL 1 and HCL 2 branches ready to use CircleCI. This is therefore just a
minimal AppVeyor configuration for running our tests in a Windows
environment, to avoid errant test failures on PRs until we have fully
completed the move to CircleCI.
Previously we allowed adding both attributes and blocks, and we allowed
updating attributes, but we had no mechanism to surgically remove
attributes and blocks altogether.
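A short sketch of such surgical removals, assuming they are exposed as
Body.RemoveAttribute and Body.RemoveBlock in hclwrite (the input names are
invented):

    package main

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/hclwrite"
    )

    func main() {
        src := []byte(`
    keep_me    = "yes"
    legacy_opt = "remove me"

    deprecated "old" {
      setting = 1
    }
    `)
        f, diags := hclwrite.ParseConfig(src, "example.hcl", hcl.Pos{Line: 1, Column: 1})
        if diags.HasErrors() {
            panic(diags)
        }

        body := f.Body()
        body.RemoveAttribute("legacy_opt")
        for _, block := range body.Blocks() {
            if block.Type() == "deprecated" {
                body.RemoveBlock(block)
            }
        }

        // Only keep_me remains; all other content is left untouched.
        fmt.Printf("%s", f.Bytes())
    }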
In previous versions we had some bugs around template sequence escapes.
These tests show that they no longer seem to be present, and should help
prevent them from regressing in future.
Our error message for the ambiguous situation recommends doing this, but
the parser didn't previously allow it. Now we accept the form that the
error message recommends.
As before, we also accept a template with an interpolation sequence as
a disambiguation, but the error message doesn't mention that because it's
no longer idiomatic to use an inline string template containing just a
single interpolation sequence.
By default we remove properties whose values are null, in order to
produce output that is more convenient to consume in the common case.
However, sometimes those nulls are significant, so we'll allow the user
to opt in to retaining them, at the expense of producing a result that is
more noisy if the spec contains lots of optional attributes that are not
set.