Previously we allowed arrays only at the "leaf" of a set of objects
describing a block and its labels. This is not sufficient because it makes
it impossible to preserve the relative ordering of a sequence
of blocks that have different block types or labels.
The spec now allows arrays of objects to be used in place of single
objects when that value is representing either an HCL body or a set of
labels on a nested block. This relaxing does not apply to JSON objects
interpreted as expressions or bodies interpreted in dynamic attributes
mode, since there is no requirement to preserve attribute ordering or
support duplicate property names in those scenarios.
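For illustration, here's a rough sketch of the kind of document this now
permits (the resource/aws_instance naming is invented, and the import path is
the modern hcl/v2 one; the packages still carried the zcl name when this
change landed):

    package main

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2/json"
    )

    func main() {
        // The value of "resource" is an array of objects rather than a single
        // object, so the aws_instance block is guaranteed to decode before the
        // aws_security_group block even though their block types differ.
        src := []byte(`{
          "resource": [
            {"aws_instance":       {"example": {"ami": "ami-12345"}}},
            {"aws_security_group": {"example": {"name": "example"}}}
          ]
        }`)

        f, diags := json.Parse(src, "config.tf.json")
        if diags.HasErrors() {
            fmt.Println(diags.Error())
            return
        }
        fmt.Printf("parsed OK; body is %T\n", f.Body)
    }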
This new model imposes additional constraints on the underlying JSON
parser used to interpret JSON HCL: it must now be able to retain the
relative ordering of object keys and accept multiple definitions of the
same key. This requirement is not imposed on _producers_, which are free
to use the allowance for arrays of objects to force ordering and duplicate
keys with JSON-producing libraries that are unable to make these
distinctions.
Since we are now requiring a specialized parser anyway, we also require
that it be able to represent numbers at full precision, whereas before
we made some allowances for implementations to not support this.
The peeker has an "include newlines" stack which the parser manipulates
to switch between the newline-sensitive and non-sensitive scanning modes.
If the parser code fails to manage this stack correctly (for example,
due to a missed call to PopIncludeNewlines) then this causes very
confusing downstream errors that are otherwise difficult to debug.
As an extra debugging tool for when errors _are_ detected, if this problem
is encountered during tests we can now produce a visualization of the
pushes and pops to help the test developer see which of them seem out of
place.
This is a lot of ugly extra code but it's usually disabled and seems worth
it to allow us to quickly catch bugs that would otherwise be quite
difficult to diagnose.
Previously it was mismanaging the stack by first pushing on "false" and
then trying to undo that by pushing on "true". Instead, it should just
pop off the "false" to return to whatever the previous setting was, since
indexing brackets might already be inside a no-newlines context.
We were previously using an ugly combination of "pretty" and "spew" to
do this, which never really quite worked because of limitations in each
of those.
deep.Equal doesn't produce quite as much detailed information as the
others, but it has the advantage of showing exactly where a difference
exists rather than forcing us to hunt through a noisy diff to find it.
Fuzz testing revealed that there were a few different crashers in the
string literal decoder, which was previously a rather-unwieldy
hand-written scanner with manually-implemented lookahead.
Rather than continuing to hand-tweak that code, here instead we use
ragel (which we were already using for the main scanner anyway) to
partition our string literals into tokens that are easier for our
decoder to wrangle.
As a bonus, this also makes our source ranges in our diagnostics more
accurate.
A common pattern is emerging in calling applications: using single-item
absolute traversals to give the impression of static language keywords.
This new function makes that explicitly possible and allows a convenient
pattern for doing so that should improve the readability of a calling
application making use of it.
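As a hedged illustration, assuming the function in question is what current
releases expose as hcl.ExprAsKeyword, the calling pattern looks roughly like
this (the attribute and the "all" keyword are invented for the example):

    import "github.com/hashicorp/hcl/v2"

    func decodeIgnore(attr *hcl.Attribute) {
        switch hcl.ExprAsKeyword(attr.Expr) {
        case "all":
            // the user wrote the bare keyword: ignore = all
        case "":
            // not a single-name traversal; evaluate as a normal expression
        default:
            // some other identifier; likely an error for this application
        }
    }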
Calling applications often need to validate strings provided by the user
that will eventually be variable or attribute names in the evaluation
scope, to ensure that they will be evaluable.
Rather than having each application specify its own different subset of
the full set we support (which is derived from Unicode specifications),
we provide a simple function to let callers easily check the validity
of a potential identifier using exactly the same scanning rules we use
within the expression scanner.
To achieve this we actually invoke the scanner and then assert on its
result, which is a pretty expensive way to check just one string, but it's
easy to do with code we already have in place, and we don't expect this
sort of validation to be going on in a tight loop.
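A minimal sketch of the intended usage, assuming the helper is what current
releases expose as hclsyntax.ValidIdentifier:

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2/hclsyntax"
    )

    func checkVariableName(name string) error {
        // Reject names the expression scanner would not accept as a single
        // identifier, so they remain referenceable during evaluation.
        if !hclsyntax.ValidIdentifier(name) {
            return fmt.Errorf("%q is not a valid variable name", name)
        }
        return nil
    }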
These allow the inclusion of arbitrary unicode codepoints (always encoded
as UTF-8) using a hex representation.
\u expects four digits and can thus represent only characters in the basic
multilingual plane.
\U expects eight digits and can thus represent all unicode characters,
at the cost of being extra-verbose.
Since our parser properly accounts for unicode characters (including
combining sequences) it's recommended to include them literally (UTF-8
encoded) in source code, but these sequences are useful for explicitly
representing non-printable characters that could otherwise appear
invisible in source code, such as zero-width modifier characters.
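A short sketch showing both forms in native syntax (the attribute names are
invented for illustration):

    import (
        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/hclsyntax"
    )

    func parseEscapes() (*hcl.File, hcl.Diagnostics) {
        src := []byte(`
    joined = "a\u200Db"   # U+200D ZERO WIDTH JOINER, invisible if written literally
    rocket = "\U0001F680" # U+1F680 is outside the basic multilingual plane
    `)
        return hclsyntax.ParseConfig(src, "example.hcl", hcl.Pos{Line: 1, Column: 1})
    }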
This fixes #6.
We inherited a restriction from an early zcl prototype here, but it's
far too strict to prohibit tabs entirely and so we'll accept them and
just treat them as spaces for column-counting purposes.
Tabs are still not _advised_, since they add extra complexity for problems
like generating annotated source code snippets (can't necessarily know
how large the tab stop is going to be) or doing surgical updates to
existing source files. The canonical formatting applied by hclwrite's
Format function will still eliminate all tabs, imposing the canonical
style of two spaces per indent level.
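As a rough sketch of that last point, using the current hclwrite package name:

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2/hclwrite"
    )

    func reindent() {
        src := []byte("settings {\n\ttimeout = 30\n}\n") // indented with a tab
        out := hclwrite.Format(src)
        fmt.Printf("%s", out) // the tab becomes two spaces of indentation
    }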
This fixes #2.
An earlier iteration of this package was able to optionally use HIL as
its expression engine in place of the hclsyntax expression parser, but
this has since been removed and so this flag no longer has any effect.
Consequently, the public functions ParseWithHIL and ParseFileWithHIL were,
in fact, just using the hclsyntax parser and thus behaving identically to
the Parse and ParseFile functions.
A pattern has emerged of wrapping Expression instances with other
Expressions in order to subtly modify their behavior. A key example of
this is in ext/dynblock, where we wrap an expression in order to introduce
our additional iteration variable for expressions in dynamic blocks.
Rather than having each wrapper expression implement wrapping
implementations for our various syntax-level-analysis functions (like
ExprList and AbsTraversalForExpr), instead we define a standard mechanism
to unwrap expressions back to the lowest-level object -- usually an AST
node -- and then use this in all of our analyses that look at the
expression's structure rather than its value.
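A hedged sketch of a wrapper participating in this mechanism, loosely modeled
on the ext/dynblock case mentioned above (the type name and the extra-variable
detail are illustrative):

    import (
        "github.com/hashicorp/hcl/v2"
        "github.com/zclconf/go-cty/cty"
    )

    // exprWrap injects extra variables into the EvalContext before delegating
    // to the wrapped expression.
    type exprWrap struct {
        hcl.Expression
        extra map[string]cty.Value
    }

    func (e exprWrap) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
        child := ctx.NewChild()
        child.Variables = e.extra
        return e.Expression.Value(child)
    }

    // UnwrapExpression lets the syntax-level analysis functions in the hcl
    // package (ExprList, AbsTraversalForExpr, etc.) see through this wrapper
    // to the underlying AST node.
    func (e exprWrap) UnwrapExpression() hcl.Expression {
        return e.Expression
    }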
Terraform is the prime use-case for the dynblock extension, so we'll
include this here currently as a proof-of-concept for Terraform's usage,
but eventually (once Terraform is actually using it) this'll give some
insurance that it doesn't get broken.
In early prototyping the template control sequence introducer was
specified as !{, but that changed to %{ along the way because it seemed
more intuitive and less likely to collide with literal strings.
However, the parser's string literal handling still had remnants of the
old syntax, causing strange quirks in parsing strings that contained
exclamation points.
Now we correctly expect %{ as the control sequence introducer, %%{ as its
escape sequence, and additionally fix a bug where previously template
sequence introduction characters at the end of a string literal would
be silently dropped due to them representing an unterminated escape
sequence.
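A brief sketch of the now-correct forms (attribute names invented for
illustration):

    import (
        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/hclsyntax"
    )

    func parseTemplates() (*hcl.File, hcl.Diagnostics) {
        src := []byte(`
    message = "%{ if verbose }detailed output enabled%{ endif }"
    literal = "100%%{ not a directive }" # renders as: 100%{ not a directive }
    `)
        return hclsyntax.ParseConfig(src, "example.hcl", hcl.Pos{Line: 1, Column: 1})
    }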
This fixes #3.
Traversals are always passed by value, so returning a pointer here is
inconsistent with how hcl.TraverseIndex is used elsewhere and thus makes
life inconvenient for callers making type assertions.
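For context, a sketch of the kind of type switch this keeps consistent (the
printing is only illustrative):

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2"
    )

    func describeSteps(traversal hcl.Traversal) {
        for _, step := range traversal {
            switch ts := step.(type) {
            case hcl.TraverseRoot:
                fmt.Println("root:", ts.Name)
            case hcl.TraverseAttr:
                fmt.Println("attr:", ts.Name)
            case hcl.TraverseIndex:
                // With index steps as values rather than pointers, this case
                // matches the steps produced elsewhere in the API.
                fmt.Println("index:", ts.Key.GoString())
            }
        }
    }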
In complex expressions it can be hard to determine which portion is
relevant when we print a diagnostic message. To address this, when color
is enabled we bold and underline the "subject" portion of the source code,
which then makes it stand out within the full lines of code we print
in the snippet.
Now that we have helper methods for computing relationships between
ranges, we can eliminate all of the tricky line-counting and byte-counting
code here and instead use the higher-level operations.
The result is a single loop using the RangeScanner.
This is a convenience wrapper around SourceRange.Overlap that also
calculates the ranges in the receiver that _aren't_ overlapping with the
given range.
This is useful when, for example, partitioning a portion of source code
to insert markers to highlight the location of an error, as we do when
printing code snippets as part of diagnostic output.
This is a generalization of RangeBetween that finds a single range that
covers the full extent of both given ranges, possibly also including some
additional content between the ranges if they do not overlap.
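Assuming this is the function that appears in current releases as
hcl.RangeOver, a minimal usage sketch:

    import "github.com/hashicorp/hcl/v2"

    // combinedSubject returns one range covering both operands of a
    // hypothetical binary expression, for use as a diagnostic subject.
    func combinedSubject(lhs, rhs hcl.Expression) hcl.Range {
        return hcl.RangeOver(lhs.Range(), rhs.Range())
    }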
RangeScanner has an interface similar to bufio.Scanner for partitioning
a buffer into tokens, but it returns the hcl.Range of each token along
with that token so that the caller can see where the token fits in
relation to the entire source file.
The main intended use-case for this is to partition a source file into
lines for the purpose of printing a source code snippet in diagnostic
output. Having the source location information is important in that case
to recognize which lines belong to the subject and context of each
diagnostic.
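A rough sketch of that use-case (the src and diag values are assumed to come
from elsewhere in the caller):

    import (
        "bufio"
        "fmt"

        "github.com/hashicorp/hcl/v2"
    )

    func printSubjectLines(src []byte, diag *hcl.Diagnostic) {
        sc := hcl.NewRangeScanner(src, diag.Subject.Filename, bufio.ScanLines)
        for sc.Scan() {
            lineRng := sc.Range()
            if lineRng.Overlaps(*diag.Subject) {
                // This line contributes to the diagnostic's subject, so it
                // belongs in the printed snippet.
                fmt.Printf("%4d: %s\n", lineRng.Start.Line, sc.Bytes())
            }
        }
    }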
This is useful, for example, when printing source snippets to the terminal
as part of diagnostics, in order to detect the portion of the source code
that coincides with the subject or context of each diagnostic.
This can be useful, for example, when using Expression.Variables to
pre-validate all of the referenced variables before evaluation, so that
the traversal source ranges can be included in any generated diagnostics.
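A sketch of that pattern (the scope map is a stand-in for however the
application tracks its available variables):

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2"
    )

    func checkReferences(expr hcl.Expression, scope map[string]bool) hcl.Diagnostics {
        var diags hcl.Diagnostics
        for _, traversal := range expr.Variables() {
            if name := traversal.RootName(); !scope[name] {
                diags = append(diags, &hcl.Diagnostic{
                    Severity: hcl.DiagError,
                    Summary:  "Reference to undefined variable",
                    Detail:   fmt.Sprintf("There is no variable named %q.", name),
                    Subject:  traversal.SourceRange().Ptr(),
                })
            }
        }
        return diags
    }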
As an extra level of confidence in addition to the unit tests, this
integration test verifies that a certain set of features that Terraform
uses are able to work properly together.
Terraform is used as an example here just because it's a more advanced
consumer of HCL and thus it exercises some codepaths that most
applications don't need, such as ExprList and AbsTraversalForExpr.
This helper allows a calling application to require a given expression be
some sort of list constructor (tuple constructor in native syntax, or
array in JSON) and peel off that outer level of list to obtain a slice
of the Expression objects inside.
This is useful in rare cases where the calling application needs to
extract the expressions within the list without evaluating the entire list
expression first. For example, the expressions that result from this
function might be passed into AbsTraversalForExpr in situations where the
caller requires a static list of static traversals that will never
actually be evaluated.
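A minimal sketch of the helper in use (attr is assumed to be an hcl.Attribute
already extracted from body content):

    import "github.com/hashicorp/hcl/v2"

    func listElements(attr *hcl.Attribute) ([]hcl.Expression, hcl.Diagnostics) {
        // Peel off the outer list/tuple constructor without evaluating it,
        // leaving each element as its own unevaluated Expression.
        return hcl.ExprList(attr.Expr)
    }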
These functions permit a calling application to recognize when an
expression represents a static absolute traversal and obtain that
traversal. This allows for the unusual-but-valid case where an application
wishes to access the expression source rather than its resulting value,
when the expression source is something that can be understood as a
traversal.
An example use-case is an attribute that takes a list of other attributes
it depends on, expressed as traversals. In this case the calling
application needs to access the attribute names themselves rather than
their values, e.g. to build some sort of dependency graph to gradually
populate the scope for evaluation.
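A rough sketch of that dependency use-case, combining these functions with
ExprList from above (the idea of collecting root names is illustrative):

    import "github.com/hashicorp/hcl/v2"

    func dependencyRoots(attr *hcl.Attribute) ([]string, hcl.Diagnostics) {
        exprs, diags := hcl.ExprList(attr.Expr)
        var roots []string
        for _, e := range exprs {
            traversal, moreDiags := hcl.AbsTraversalForExpr(e)
            diags = append(diags, moreDiags...)
            if moreDiags.HasErrors() {
                continue
            }
            // The traversal is never evaluated; we only record what it refers to.
            roots = append(roots, traversal.RootName())
        }
        return roots, diags
    }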
Sometimes we want an expression that just wraps a static value, e.g. for
testing or to provide a default value for a missing attribute.
StaticExpr gives us a convenient way to do that, returning a value that
implements the Expression interface by returning just the given static
value.
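A brief sketch of the default-value case (the attribute name and the default
of 3 are invented):

    import (
        "github.com/hashicorp/hcl/v2"
        "github.com/zclconf/go-cty/cty"
    )

    func retriesExpr(content *hcl.BodyContent, body hcl.Body) hcl.Expression {
        if attr, ok := content.Attributes["retries"]; ok {
            return attr.Expr
        }
        // Fall back to a constant default of 3, attributed to the body's
        // "missing item" range so diagnostics still point somewhere useful.
        return hcl.StaticExpr(cty.NumberIntVal(3), body.MissingItemRange())
    }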
This is a super-invasive update since the "zcl" package in particular
is referenced all over.
There are probably still a few zcl references hanging around in comments,
etc., but this takes care of most of it.