Attribute-only splat expressions cannot have other splats nested inside,
since we're only interested in supporting how these behaved for HIL when
running inside HashiCorp Terraform. More complex cases should be dealt
with using either full splats (bracketed *) or "for" expressions.
Attribute-only splat expressions, indicated by using the star symbol as
if it were an attribute, are inherited from HIL and are a limited sort of
splat that only works for walking through attributes in each item of its
source.
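As a sketch with hypothetical names, the following walks the "name"
attribute of each element of tuple_of_objects, producing a new tuple of
the results:
    tuple_of_objects.*.name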
Later zcl will also get full splat expressions, whose syntax uses
the star symbol as if it were an _index_, which will generalize to
supporting _all_ postfix traversal operations, including indexing and
nested splats.
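As a sketch of that planned form (again with hypothetical names), a full
splat would permit further traversal operations such as indexing after
the splat, which the attribute-only form cannot support:
    tuple_of_objects[*].name[0]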
The func(arg...) syntax allows the final argument to be a sequence-typed
value that then expands to be one argument for each element of the
value.
This allows applications to define variadic functions where that's
user-friendly while still allowing users to pass tuples to those functions
in situations where the args are chosen dynamically.
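A sketch, where "max" stands in for some hypothetical application-defined
variadic function:
    max(1, 2, 3)        # fixed arguments
    max(number_list...) # each element of number_list becomes one argument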
The ForExpr is essentially a list/map comprehension, allowing one
collection to be projected into another. From a syntactic standpoint
it's the most complex structure we've dealt with so far, with many
separate parts.
The tests introduced here are not exhaustive but illustrate that the
basic mechanism is working.
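One of the simple forms exercised, as a sketch with hypothetical names:
    [for item in items: item.name]
This visits each element of items in turn and produces a new tuple of
the item.name results.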
Previously operations were an enum, but we've ended up needing to store
a collection of values against each, so an operation being a pointer to
a struct feels more natural.
This in turn allows us to more easily fix the return types of the
operations, so that we don't need to do any unusual work to understand
that (for example) arithmetic always returns a number.
Previously we were detecting the exactly-one-part case at parse time and
skipping the TemplateExpr node in the AST, but this was problematic
because we then misaligned the source range for the expression to the
_content_ of the quotes, rather than including the quotes themselves.
As well as producing confusing diagnostics, this also caused problems for
zclwrite since it relies on source ranges to map our AST back onto the
source tokens it came from.
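For example, even a simple quoted string with no interpolations:
    foo = "hello"
is now retained as a single-part TemplateExpr whose source range covers
the quotes as well as the content between them.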
Previously we tolerated EOF as an alias for newline, but a file without
a newline at the end is an edge case primarily limited to contrived
examples in unit tests, and requiring it simplifies tasks such as code
generation in zclwrite, since we can always assume that every block item
comes with a built-in line terminator.
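For example, even a file containing only:
    foo = 1
must terminate that item with a newline; EOF alone is no longer accepted
in its place.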
When we're skipping comments but retaining newlines, we need to do some
sleight-of-hand because single-line comment tokens contain the newline
that terminates them (for simpler handling of lead doc comments) but our
parsing can be newline-sensitive.
To allow for this, as a special case we transform single-line comment
tokens into newlines when in this situation, thus allowing parser code
to just worry about the newlines and skip over the comments.
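For example, given:
    foo = 1 # set foo
the comment token covers "# set foo" along with its terminating newline,
so when skipping it the parser substitutes a synthetic newline token in
its place, keeping the item properly terminated.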
This can either be a traversal or a first-class node depending on whether
the given expression is a literal. This exception is made to allow
applications to conditionally populate only part of a potentially-large
collection if the config is only requesting one or two distinct indices.
In particular, it allows the following to be considered a single traversal
from the scope:
    foo.bar[0].baz
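By contrast, a non-literal index (here "i" is a hypothetical variable)
cannot be part of the traversal, so it becomes a first-class index node
instead:
    foo.bar[i].baz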
Traversal operators are the operators that can appear after a value
to traverse into the data structure that value represents. So far only
the attribute access operator is implemented.
This applies both to the whole of bare expressions and to any nested
expressions within parentheses. The latter means that an attribute value
can span over multiple lines if it's wrapped in parens:
    foo = (
      1 + 2 + 3 + 4 + 5
    )
The ~ character can be used at the start and end of interpolation
sequences to trim off whitespace in neighboring literals, with the goal
of allowing extra whitespace to be included for readability without
including it in the resulting string.
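A sketch, with "name" standing in for some hypothetical variable:
    foo = "  ${~ name ~}  "
Both strip markers consume the adjacent literal spaces, so the result is
just the value of name with no surrounding whitespace.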
Conditional expression parsing is ostensibly implemented, but since it
depends on the rest of the expression parsers -- not yet implemented --
it cannot be tested in isolation.
Also includes an initial implementation of the conditional expression
node, but this is also not yet tested and so may need further revisions
once we're in a better position to test it.
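For reference, the conditional form being parsed is, as a sketch with
hypothetical names:
    a = condition ? result_if_true : result_if_false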
This rewrite of decodeQuotedLit, now called decodeStringLit, is able to
handle both cases with a single function, and also now correctly handles
situations where double-$ and double-! are not followed immediately by
a { symbol, and must thus be treated literally.
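A sketch of the decoding behavior:
    a = "on$$off"   # no "{" follows the "$$", so this stays literally "on$$off"
    b = "x $${y} z" # decodes to the literal "x ${y} z", with no interpolation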
The context where a string literal was found affects what sort of escaping
it can have, so we need to distinguish these cases so that we will only
look for and handle backslash escapes in quoted strings.
This recovery method attempts to place the peeker directly after the
newline indicating the end of the current body item. It does this by
counting open and close bracketing constructs and then returning when
a newline is encountered with no bracketing constructs open.
It's designed for use in the "header" part of a body item, with no
bracketing constructs open yet. It _might_ work in other situations, but
is likely to end up choosing the wrong end point if used in the middle
of a bracketed expression that itself contains newlines.
The parser has some recovery heuristics but they will not always achieve
the best result. To prevent unsuccessful recovery from causing a cascade
of confusing follow-on errors, we'll track in the parser whether recovery
has been attempted and then the specific sub-parsers can use this to
skip certain "unexpected token type" errors in recovery situations,
assuming instead that an earlier error will cover the problem and that
we just want to bail out quickly.
The scanner produces complicated sequences for quoted strings due to the
template language, but sometimes we just want a simple string with no
interpolations.