Previously we required optional attributes to be specified as pointers so that we could represent the empty vs. absent distinction.
For applications that don't need to make that distinction, representing "optional" as a struct tag is more convenient.
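As a rough illustration (the struct and its field names are invented for
this example), the two styles look like this in gohcl decoding:

    type ServiceConfig struct {
        // A pointer field can distinguish an absent attribute (nil) from
        // one explicitly set to the empty string.
        Description *string `hcl:"description"`

        // The "optional" tag lets the attribute simply be omitted, in
        // which case the field keeps its zero value.
        Port int `hcl:"port,optional"`
    }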
Previously we allowed arrays only at the "leaf" of a set of objects
describing a block and its labels. This was not sufficient because it
made it impossible to preserve the relative ordering of a sequence of
blocks that have different block types or labels.
The spec now allows arrays of objects to be used in place of single
objects when that value is representing either an HCL body or a set of
labels on a nested block. This relaxing does not apply to JSON objects
interpreted as expressions or bodies interpreted in dynamic attributes
mode, since there is no requirement to preserve attribute ordering or
support duplicate property names in those scenarios.
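As a sketch (the block type and attribute names are invented, and the
actual interpretation depends on the schema in use), two blocks of the
same type but with different labels can now retain their ordering like
this:

    {
      "service": [
        { "web":    { "port": 80 } },
        { "worker": { "count": 2 } }
      ]
    }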
This new model imposes additional constraints on the underlying JSON
parser used to interpret JSON HCL: it must now be able to retain the
relative ordering of object keys and accept multiple definitions of the
same key. This requirement is not imposed on _producers_, which are free
to use the allowance for arrays of objects to force ordering and duplicate
keys with JSON-producing libraries that are unable to make these
distinctions.
Since we are now requiring a specialized parser anyway, we also require
that it be able to represent numbers at full precision, whereas before
we made some allowances for implementations to not support this.
The peeker has an "include newlines" stack which the parser manipulates
to switch between the newline-sensitive and non-sensitive scanning modes.
If the parser code fails to manage this stack correctly (for example,
due to a missed call to PopIncludeNewlines) then this causes very
confusing downstream errors that are otherwise difficult to debug.
As an extra debug tool for when errors _are_ detected, when this
problem is encountered during tests we are able to produce a
visualization of the pushes and pops to help the test developer see
which of them seem out of place.
This is a lot of ugly extra code, but it's usually disabled and seems
worth it to let us quickly catch bugs that would otherwise be quite
difficult to diagnose.
Previously the parsing of indexing brackets was mismanaging this stack
by first pushing on "false" and then trying to undo that by pushing on
"true". Instead, it should just pop off the "false" to return to
whatever the previous setting was, since indexing brackets might
already be inside a no-newlines context.
We were previously using an ugly combination of "pretty" and "spew" to
do this, which never really quite worked because of limitations in each
of those.
deep.Equal doesn't produce quite as much detailed information as the
others, but it has the advantage of showing exactly where a difference
exists rather than forcing us to hunt through a noisy diff to find it.
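For reference, a minimal sketch of the pattern, assuming the library in
question is github.com/go-test/deep:

    package example

    import (
        "testing"

        "github.com/go-test/deep"
    )

    type config struct {
        Name  string
        Ports []int
    }

    func TestDeepEqual(t *testing.T) {
        got := config{Name: "web", Ports: []int{80, 443}}
        want := config{Name: "web", Ports: []int{80, 8443}}

        // deep.Equal returns nil when the values match, or a list of
        // human-readable strings, each naming the exact field that differs.
        if diff := deep.Equal(got, want); diff != nil {
            for _, problem := range diff {
                t.Errorf("unexpected result: %s", problem)
            }
        }
    }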
Fuzz testing revealed a few different crashers in the string literal
decoder, which was previously a rather unwieldy hand-written scanner
with manually-implemented lookahead.
Rather than continuing to hand-tweak that code, here instead we use
ragel (which we were already using for the main scanner anyway) to
partition our string literals into tokens that are easier for our
decoder to wrangle.
As a bonus, this also makes our source ranges in our diagnostics more
accurate.
Now that we have the necessary functions to deal with this in the
low-level HCL API, it's more intuitive to use bare identifiers for these
parameter names. This reinforces the idea that they are symbols being
defined rather than arbitrary string expressions.
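Assuming this refers to user-defined function declarations in the style
of the ext/userfunc extension, the result reads roughly like this (the
function itself is invented):

    function "add" {
      # The parameter names are bare identifiers rather than quoted
      # strings, emphasizing that they declare new symbols.
      params = [a, b]
      result = a + b
    }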
In a few specific portions of the spec format it's convenient to have
access to some of the functions defined in the cty stdlib. Here we allow
them to be used when constructing the value for a "literal" spec and in
the result expression for a "transform" spec.
This new spec type allows evaluating an arbitrary expression on the
result of a nested spec, for situations where a value must be
transformed in some way.
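A sketch of how the two allowances might combine in a spec file; the
exact spec-file grammar, the name of the variable exposing the nested
result (shown here as "nested"), and the availability of "upper" from
the cty stdlib are assumptions to be checked against the hcldec
documentation:

    transform {
      attr {
        name = "greeting"
        type = string
      }
      # Evaluate an arbitrary expression over the nested spec's result.
      result = upper(nested)
    }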
This is essentially a CLI wrapper around the hcldec package, accepting a
decoding specification via a HCL-based language and using it to translate
input HCL files into JSON values while performing basic structural and
type validation of the input files.
A common pattern is emerging in calling applications: using single-item
absolute traversals to give the impression of static language keywords.
This new function makes that pattern explicit and convenient, which
should improve the readability of calling applications that use it.
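A sketch of the calling pattern, assuming the new function is
hcl.ExprAsKeyword and using the current hcl/v2 import path:

    package example

    import "github.com/hashicorp/hcl/v2"

    func parseMode(expr hcl.Expression) (string, hcl.Diagnostics) {
        // ExprAsKeyword returns the name if the expression is a single
        // bare identifier (a one-element absolute traversal), or "".
        switch kw := hcl.ExprAsKeyword(expr); kw {
        case "enabled", "disabled":
            return kw, nil
        default:
            return "", hcl.Diagnostics{
                {
                    Severity: hcl.DiagError,
                    Summary:  "Invalid mode",
                    Detail:   "The mode argument must be either enabled or disabled.",
                    Subject:  expr.Range().Ptr(),
                },
            }
        }
    }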
Calling applications often need to validate strings provided by the user
that will eventually be variable or attribute names in the evaluation
scope, to ensure that they will be evaluable.
Rather than having each application specify its own different subset of
the full set we support (which is derived from Unicode specifications),
we provide a simple function to let callers easily check the validity
of a potential identifier using exactly the same scanning rules we use
within the expression scanner.
To achieve this we actually invoke the scanner and then assert on its
result, which is a pretty expensive way to check a single string, but
it's easy to do with code we already have in place and we don't expect
this sort of validation to happen in a tight loop.
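A sketch of the intended use, assuming the helper is
hclsyntax.ValidIdentifier (import path shown for the current hcl/v2
layout):

    package example

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2/hclsyntax"
    )

    func checkSymbolName(name string) error {
        // ValidIdentifier reports whether the string would scan as a
        // single identifier token under the native syntax rules.
        if !hclsyntax.ValidIdentifier(name) {
            return fmt.Errorf("%q is not a valid identifier", name)
        }
        return nil
    }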
The readme was previously unclear about the fact that HCL is not a configuration language in itself but rather a toolkit for defining and parsing configuration languages.
It may still not be totally clear, but it is hopefully clearer than it was.
These allow the inclusion of arbitrary Unicode codepoints (always
encoded as UTF-8) using a hex representation.
\u expects four digits and can thus represent only characters in the
Basic Multilingual Plane.
\U expects eight digits and can thus represent all Unicode characters,
at the cost of being extra-verbose.
Since our parser properly accounts for Unicode characters (including
combining sequences) it's recommended to include them literally (UTF-8
encoded) in source code, but these sequences are useful for explicitly
representing non-printable characters that could otherwise appear
invisible in source code, such as zero-width modifier characters.
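For example (attribute names invented):

    short = "caf\u00E9"          # four hex digits: U+00E9
    long  = "rocket \U0001F680"  # eight hex digits: U+1F680
    zwnj  = "a\u200Cb"           # makes an invisible zero-width non-joiner explicit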
This fixes #6.
We inherited a restriction from an early zcl prototype here, but it's
far too strict to prohibit tabs entirely and so we'll accept them and
just treat them as spaces for column-counting purposes.
Tabs are still not _advised_, since they add extra complexity for problems
like generating annotated source code snippets (can't necessarily know
how large the tab stop is going to be) or doing surgical updates to
existing source files. The canonical formatting applied by hclwrite's
Format function will still eliminate all tabs, imposing the canonical
style of two spaces per indent level.
This fixes #2.
An earlier iteration of this package was able to optionally use HIL as
its expression engine in place of the hclsyntax expression parser, but
this has since been removed and so this flag no longer has any effect.
Consequently, the public functions ParseWithHIL and ParseFileWithHIL were,
in fact, just using the hclsyntax parser and thus behaving identically to
the Parse and ParseFile functions.
A pattern has emerged of wrapping Expression instances with other
Expressions in order to subtly modify their behavior. A key example of
this is in ext/dynblock, where we wrap an expression in order to
introduce our additional iteration variable for expressions in dynamic
blocks.
Rather than having each wrapper expression provide its own
implementations of our various syntax-level analysis functions (like
ExprList and AbsTraversalForExpr), we instead define a standard
mechanism to unwrap expressions back to the lowest-level object --
usually an AST node -- and then use this in all of our analyses that
look at the expression's structure rather than its value.
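A sketch of the wrapper pattern under the new mechanism, assuming the
unwrap hook is an UnwrapExpression method that hcl.UnwrapExpression
follows; names and import paths reflect the current hcl/v2 API and
should be treated as illustrative:

    package example

    import (
        "github.com/hashicorp/hcl/v2"
        "github.com/zclconf/go-cty/cty"
    )

    // wrappedExpr injects extra variables into the evaluation context
    // before delegating to the wrapped expression.
    type wrappedExpr struct {
        hcl.Expression
        extra map[string]cty.Value
    }

    func (w wrappedExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
        child := ctx.NewChild()
        child.Variables = w.extra
        return w.Expression.Value(child)
    }

    // UnwrapExpression lets syntax-level analyses such as hcl.ExprList
    // and hcl.AbsTraversalForExpr reach the underlying AST node.
    func (w wrappedExpr) UnwrapExpression() hcl.Expression {
        return w.Expression
    }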
Terraform is the prime use-case for the dynblock extension, so for now
we'll include this as a proof-of-concept of Terraform's usage, but
eventually (once Terraform is actually using it) this will give some
assurance that it doesn't get broken.
For applications already using hcldec, a decoder specification can be used
to automatically drive the recursive variable detection walk that begins
with WalkForEachVariables, allowing all "for_each" and "labels" variables
in a recursive block structure to be detected in a single call.
This function returns a map describing all of the child block types
declared inside a spec. This can be used for recursive decoding of bodies
using the low-level HCL API, though in most cases callers should just use
Decode which does recursive decoding of an entire nested structure in
a single call.
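A sketch of driving a manual recursive decode with it, assuming the
function is hcldec.ChildBlockTypes; label names are omitted here for
brevity:

    package example

    import (
        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/hcldec"
    )

    // blockSchema derives a one-level body schema from a spec's declared
    // child block types, which can then be passed to Body.Content or
    // Body.PartialContent as part of a manual recursive walk.
    func blockSchema(spec hcldec.Spec) *hcl.BodySchema {
        schema := &hcl.BodySchema{}
        for typeName := range hcldec.ChildBlockTypes(spec) {
            schema.Blocks = append(schema.Blocks, hcl.BlockHeaderSchema{
                Type: typeName,
            })
        }
        return schema
    }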
The previous ForEachVariables method was flawed because it didn't have
enough information to properly analyze child blocks. Since the core HCL
API requires a schema for any body analysis, and since a schema only
describes one level of configuration structure at a time, we must require
callers to drive a recursive walk through their nested block structure so
that the correct schema can be provided at each level.
This API is rather more complex than is ideal, but it is the best we
can do with the HCL Body API as currently defined, which is shaped that
way in order to properly support ambiguous syntaxes like JSON.
This extension allows an application to support dynamic generation of
child blocks based on expressions in certain contexts. This is done using
a new block type called "dynamic", which contains an iteration value
(which must be a collection) and a specification of how to construct a
child block for each element of that collection.
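For illustration, a dynamically-generated block might look something
like this; the block type "topic" and the collection "topics" are
invented, and the iteration variable is assumed to default to the block
type name:

    dynamic "topic" {
      for_each = topics          # a collection provided by the application
      labels   = [topic.value]   # labels for each generated block
      content {
        # Body of each generated "topic" block; topic.key and topic.value
        # refer to the current element of the collection.
      }
    }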
To assist in testing code that depends on hcl.ExprList and
hcl.AbsTraversalForExpr we now implement the necessary interfaces on our
existing MockExprLiteral and MockExprVariable, as well as adding new
functions MockExprList and MockExprTraversal that more directly serve
those interfaces with full functionality.
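A sketch of how these mocks might be used together in a test, with
signatures as understood from the hcltest package (current hcl/v2
import paths):

    package example

    import (
        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/hcltest"
        "github.com/zclconf/go-cty/cty"
    )

    func mockExprs() (hcl.Expression, hcl.Expression) {
        // Statically behaves as a list of two expressions, so code using
        // hcl.ExprList can be exercised without running a real parser.
        listExpr := hcltest.MockExprList([]hcl.Expression{
            hcltest.MockExprLiteral(cty.StringVal("a")),
            hcltest.MockExprLiteral(cty.StringVal("b")),
        })

        // Reports the given traversal from hcl.AbsTraversalForExpr.
        travExpr := hcltest.MockExprTraversal(hcl.Traversal{
            hcl.TraverseRoot{Name: "foo"},
            hcl.TraverseAttr{Name: "bar"},
        })

        return listExpr, travExpr
    }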
In early prototyping the template control sequence introducer was
specified as !{, but that changed to %{ along the way because it seemed
more intuitive and less likely to collide with literal strings.
However, the parser's string literal handling still had remnants of the
old syntax, causing strange quirks in parsing strings that contained
exclamation points.
Now we correctly expect %{ as the control sequence introducer and %%{
as its escape sequence, and we additionally fix a bug where template
sequence introduction characters at the end of a string literal would
previously be silently dropped because they were treated as an
unterminated escape sequence.
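For example (the variable "formal" is invented):

    greeting = "%{ if formal }Good day%{ else }Hi%{ endif }, friend!"

    # To include the literal characters %{ in a template, escape them:
    note = "control sequences are introduced with %%{"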
This fixes #3.
Traversals are always passed by value, so returning a pointer here is
inconsistent with how hcl.TraverseIndex is used elsewhere and thus makes
life inconvenient for callers making type assertions.
In complex expressions it can be hard to determine which portion is
relevant when we print a diagnostic message. To address this, when color
is enabled we bold and underline the "subject" portion of the source code,
which then makes it stand out within the full lines of code we print
in the snippet.
Now that we have helper methods for computing relationships between
ranges, we can eliminate all of the tricky line-counting and byte-counting
code here and instead use the higher-level operations.
The result is a single loop using the RangeScanner.
This is a convenience wrapper around SourceRange.Overlap that also
calculates the ranges in the receiver that _aren't_ overlapping with the
given range.
This is useful when, for example, partitioning a portion of source code
to insert markers to highlight the location of an error, as we do when
printing code snippets as part of diagnostic output.
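A sketch, assuming the method described here is hcl.Range.PartitionAround:

    package example

    import "github.com/hashicorp/hcl/v2"

    // highlightBounds splits the range of one source line around a
    // diagnostic subject: the part before the subject, the overlapping
    // part to highlight, and the part after it.
    func highlightBounds(lineRange, subject hcl.Range) (before, highlight, after hcl.Range) {
        return lineRange.PartitionAround(subject)
    }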
This is a generalization of RangeBetween that finds a single range that
covers the full extent of both given ranges, possibly also including some
additional content between the ranges if they do not overlap.
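A sketch, assuming the function described here is hcl.RangeOver:

    package example

    import "github.com/hashicorp/hcl/v2"

    // extentOf returns the smallest range covering both inputs, including
    // any gap between them when they do not overlap.
    func extentOf(a, b hcl.Range) hcl.Range {
        return hcl.RangeOver(a, b)
    }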
RangeScanner has an interface similar to bufio.Scanner for partitioning
a buffer into tokens, but it returns the hcl.Range of each token along
with that token so that the caller can see where the token fits in
relation to the entire source file.
The main intended use-case for this is to partition a source file into
lines for the purpose of printing a source code snippet in diagnostic
output. Having the source location information is important in that case
to recognize which lines belong to the subject and context of each
diagnostic.
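A sketch of the line-partitioning use-case, using bufio.ScanLines as
the split function (current hcl/v2 import path assumed):

    package example

    import (
        "bufio"
        "fmt"

        "github.com/hashicorp/hcl/v2"
    )

    func printLineRanges(filename string, src []byte) {
        sc := hcl.NewRangeScanner(src, filename, bufio.ScanLines)
        for sc.Scan() {
            // sc.Range() covers exactly the bytes returned by sc.Bytes(),
            // so a renderer can tell which lines intersect a diagnostic's
            // subject or context range.
            fmt.Printf("%s: %s\n", sc.Range(), sc.Bytes())
        }
    }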
This is useful, for example, when printing source snippets to the terminal
as part of diagnostics, in order to detect the portion of the source code
that coincides with the subject or context of each diagnostic.
This can be useful, for example, when using Expression.Variables to
pre-validate all of the referenced variables before evaluation, so that
the traversal source ranges can be included in any generated diagnostics.
As an extra level of confidence in addition to the unit tests, this
integration test verifies that a certain set of features that Terraform
uses are able to work properly together.
Terraform is used as an example here just because it's a more advanced
consumer of HCL and thus it exercises some codepaths that most
applications don't need, such as ExprList and AbsTraversalForExpr.