It's acceptable to have "Variables" set to nil in an EvalContext, if
a particular scope has no variables at all. If _no_ contexts in the
chain have non-nil variables, this is considered to mean that variables
are not allowed at all, which produces a different error message.
The func(arg...) syntax allows the final argument to be a sequence-typed
value that then expands to be one argument for each element of the
value.
This allows applications to define variadic functions where that's
user-friendly while still allowing users to pass tuples to those functions
in situations where the args are chosen dynamically.
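For example, assuming the calling application has defined a variadic
min function, the two calls below are equivalent:

    numbers  = [1, 2, 3]
    smallest = min(numbers...)    # same as min(1, 2, 3)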
The purpose of the Variables function is to tell a calling application
what symbols need to be present in the _root_ scope, so it would be
unhelpful to include child scope traversals. Child scopes are populated
by the nodes that create them, and are thus not interesting to the
calling application (for this purpose, at least).
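For example, given the expression below, Variables would report only
"users"; "u" belongs to the child scope created by the for expression,
so it is not included:

    [for u in users: u.name]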
The ForExpr is essentially a list/map comprehension, allowing one
expression to be projected into another. From a syntactic standpoint
it's the most complex structure we've dealt with so far, with many
separate parts.
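For example, the following are both for expressions, producing a list
and a map respectively (upper and length here stand in for whatever
functions the calling application defines):

    [for s in items: upper(s)]
    {for k, v in groups: k => length(v)}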
The tests introduced here are not exhaustive but illustrate that the
basic mechanism is working.
Previously operations were an enum, but we've ended up needing to store
a collection of values against each, so an operation being a pointer to
a struct feels more natural.
This in turn allows us to more easily fix the return types of the
operations, so that we don't need to do any unusual work to understand
that (for example) arithmetic always returns a number.
Previously we were detecting the exactly-one-part case at parse time and
skipping the TemplateExpr node in the AST, but this was problematic
because the source range for the expression then covered only the
_content_ of the quotes, rather than including the quotes themselves.
As well as producing confusing diagnostics, this also caused problems for
zclwrite since it relies on source ranges to map our AST back onto the
source tokens it came from.
Previously we tolerated EOF as an alias for newline, but a file without
a newline at the end is an edge case primarily limited to contrived
examples in unit tests, and requiring it simplifies tasks such as code
generation in zclwrite, since we can always assume that every block item
comes with a built-in line terminator.
When we're skipping comments but retaining newlines, we need to do some
sleight of hand, because single-line comment tokens contain the newline
that terminates them (for simpler handling of lead doc comments) but our
parsing can be newline-sensitive.
To allow for this, as a special case we transform single-line comment
tokens into newlines when in this situation, thus allowing parser code
to just worry about the newlines and skip over the comments.
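For example, the single-line comment below is scanned as one token that
ends with the newline terminating the attribute, so in newline-sensitive
contexts the parser sees that token transformed into a plain newline:

    foo = 1 # this comment token includes the trailing newline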
The native parser's ranges don't include any surrounding comments, so we
need to do a little more work to pick them out of the surrounding token
sequences.
This just takes care of _lead_ comments, which are those that appear as
whole line comments above the item in question. Line comments, which
appear after the item on the same line, will follow in a later commit.
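For example, the comment here is a lead comment attached to foo:

    # number of widgets to create
    foo = 1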
Indexing is pretty fundamental and it's also non-trivial, so having this
exposed will make it easier for it to be implemented consistently across
many different callers, including within calling applications.
This can be either a traversal or a first-class node, depending on
whether the index expression is a literal. This distinction allows
applications to conditionally populate only part of a potentially-large
collection when the configuration requests only one or two distinct
indices.
In particular, it allows the following to be considered a single traversal
from the scope:
    foo.bar[0].baz
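By contrast, if the index expression is not a literal, as with the
variable i below, the index becomes a first-class node and the reference
is split so that foo.bar and .baz are separate, smaller traversals:

    foo.bar[i].baz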
stringer is more sensitive to certain errors than other generators, so
by running it last we give the other generators a chance to get things
straight before we ask stringer to run.
Traversal operators are the operators that can appear after a value
to traverse into the data structure that value represents. So far only
the attribute access operator is implemented.
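For example, the .bar below is the attribute access operator applied to
the value produced by the parenthesized expression:

    (foo).bar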
This allows tooling to get a string describing the context of a particular
offset into the file. This is used, for example, to provide context
above the source code snippets in console-printed diagnostic messages.
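For example, a console diagnostic might print a heading like the
following above its snippet, where the "in ..." portion comes from the
context string (the file name and block shown here are illustrative):

    on example.zcl line 12, in resource "web":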
There are certain tokens that are _never_ valid, so we might as well
catch them early in the Lex... functions rather than having to handle
them in many different contexts within the parser.
Unfortunately, when such errors occur they tend to be echoed by more
confusing errors coming from the parser, but we'll accept that for now.
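For example, single quotes are never valid in this grammar, so the
lexer can reject the following immediately:

    name = 'world'    # invalid; strings must use double quotes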
This allows code that only deals with BodyContent (post-decoding) to
still be able to report on missing items within the associated body while
providing a suitable source location.
This applies both to the whole of bare expressions and to any nested
expressions within parentheses. The latter means that an attribute value
can span over multiple lines if it's wrapped in parens:
    foo = (
      1 + 2 + 3 + 4 + 5
    )
The ~ character can be used at the start and end of interpolation
sequences to trim off whitespace in neighboring literals, with the goal
of allowing extra whitespace to be included for readability without
including it in the resulting string.
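For example, the ~ below trims the trailing spaces from the literal
that precedes the interpolation:

    greeting = "hello,   ${~ name}"    # equivalent to "hello,${name}"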
Previously we were failing to return back to template-scanning mode due
to decrementing "braces" too early, causing the remainder of the template
to be scanned as if it were an expression.
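For example, in a template like the following, the literal text after
the interpolation was previously scanned as though it were still part
of the expression:

    msg = "${name} and some more literal text"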