Previously this implementation was doing only one level of recursion in
its walk, which gave the appearance of working until the
transform/container-type specs (DefaultSpec, TransformSpec, ...) were
introduced, creating the possibility of "same body children" being more
than one level away from the initial spec.
It's still correct to process the schema and content only once, because
ImpliedSchema already collects all of the requirements from the
"same body children", and so our content object will include everything
the nested specs need in order to analyze needed variables.
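As an illustration (a sketch only; the spec shape and the body variable
are hypothetical, and the hcldec and cty packages are assumed), the
needed-variables analysis must now walk through nested container specs
to reach the AttrSpecs beneath them:

    spec := &hcldec.DefaultSpec{
        Primary: &hcldec.DefaultSpec{
            Primary: &hcldec.AttrSpec{Name: "count", Type: cty.Number},
            Default: &hcldec.AttrSpec{Name: "instances", Type: cty.Number},
        },
        Default: &hcldec.LiteralSpec{Value: cty.NumberIntVal(1)},
    }
    // The AttrSpecs are two levels below the root spec, so a single-level
    // walk would miss them; the walk must recurse through the DefaultSpecs
    // to collect traversals from the matching attribute expressions.
    vars := hcldec.Variables(body, spec)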
This is another heuristic because the "[" syntax is also the tuple
constructor start marker, but this takes care of the common cases of
indexing keywords and bracketed expressions.
This fixes #29.
We automatically convert from set to list in many other situations, so for
consistency we should accept sets here too and just treat them as
unordered sequences.
This closes #30.
Due to the special handling of the anonymous symbol used to evaluate
a splat expression, we need a lock on that symbol so that it's safe for
concurrent evaluation.
As before, it's not safe to concurrently evaluate the same expression in
the same context, but it is now safe to do so as long as all concurrent
evaluations have a _distinct_ EvalContext.
This fixes #28.
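For example (a sketch only; expr and varSets are hypothetical
placeholders, and the hcl package and standard sync package are
assumed), concurrent evaluation is safe when each goroutine builds its
own context:

    var wg sync.WaitGroup
    for _, vars := range varSets {
        vars := vars
        wg.Add(1)
        go func() {
            defer wg.Done()
            // Each goroutine gets a distinct EvalContext, which is the
            // condition for safe concurrent evaluation of the same expr.
            ctx := &hcl.EvalContext{Variables: vars}
            _, _ = expr.Value(ctx)
        }()
    }
    wg.Wait()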
Previously it was not implementing the two optional interfaces required
for this, and so decoding would fail for any AttrSpec or block spec nested
inside.
Now it passes through attribute requirements from both the primary and
the default, and block requirements only from the primary, thus allowing
fallback between two attributes, fallback from an attribute to a
constant, or fallback from a block to a constant. Other permutations are
also possible, but not very important.
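For example (a sketch only; body is assumed to be an already-parsed
hcl.Body, and the hcldec and cty packages are assumed), an
attribute-to-constant fallback can be expressed as:

    spec := &hcldec.DefaultSpec{
        Primary: &hcldec.AttrSpec{Name: "greeting", Type: cty.String},
        Default: &hcldec.LiteralSpec{Value: cty.StringVal("hello")},
    }
    // Decodes the "greeting" attribute if it is set, or falls back to
    // the constant "hello" otherwise.
    val, diags := hcldec.Decode(body, spec, nil)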
This change splits out the formatting of simple single-line lists. A list
is considered "simple" if all of its elements are on one line, all
elements are literals (except heredocs), and there are no line comments.
As an exception, a heredoc string is allowed when it is the only element
in the list.
This fixes an issue with a single-line list with one element and a line
comment. The formatter used to pull the closing bracket onto the same
line (after the comment), causing parse errors.
When there are multiple carriage returns at the end of the line, the
regular expression will consider n-1 of them to be part of the string.
Later, the last `\r` is removed. That may mean that a line that
previously did *not* terminate a heredoc string may now terminate it,
changing the meaning of the HCL file.
This (invalid) Unicode codepoint is used by the printer package to fix up
the indentation of generated files. If this codepoint is present in the
input, the package gets confused and removes more than it should,
producing unparsable output.
Previously we were only counting a \n as starting a new line, so input
using \r\n endings would get treated as one long line for source-range
purposes.
Now we also consider \r\n to be a newline marker, resetting the column
count to zero and incrementing the line just as we would do for a single
\n. This is made easier because the Unicode definition of "grapheme
cluster" treats \r\n as a single grapheme cluster, so we don't need to
do anything special in order to match it.
Previously, due to how heredoc scanning was implemented, the closing
marker for a heredoc would consume the newline that terminated it. This
was problematic in any context that is newline-sensitive, because it
would cause us to skip the TokenNewline that might terminate e.g. an
attribute definition:
foo = <<EOT
hello
EOT
bar = "hello"
Previously the "foo" attribute would fail to parse properly due to trying
to consume the "bar" definition as part of its expression.
Now we synthetically split the marker token into two parts: the marker
itself and the newline that follows it. This means that using a heredoc
in any context where newlines are sensitive will unavoidably introduce
a newline, but that seems consistent with user expectations based on how
heredocs seem to be used "in the wild".
This uses the expression static analysis features to interpret
a combination of static calls and static traversals as the description
of a type.
This is intended for situations where applications need to accept type
information from their end-users, providing a concise syntax for doing
so.
Since this is implemented using static analysis, the type vocabulary is
constrained only to keywords representing primitive types and type
construction functions for complex types. No other expression elements
are allowed.
A separate function is provided for parsing type constraints, which allows
the additional keyword "any" to represent the dynamic pseudo-type.
Finally, a helper function is provided to convert a type back into a
string representation resembling the original input, as an aid to
applications that need to produce error messages relating to user-entered
types.
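A sketch of the intended usage (the exact package path and function
names here, such as typeexpr.Type, typeexpr.TypeConstraint, and
typeexpr.TypeString, are assumptions based on the description above):

    src := []byte("list(map(string))")
    expr, diags := hclsyntax.ParseExpression(src, "types.hcl", hcl.Pos{Line: 1, Column: 1})
    if !diags.HasErrors() {
        ty, _ := typeexpr.Type(expr)         // cty.List(cty.Map(cty.String))
        fmt.Println(typeexpr.TypeString(ty)) // "list(map(string))"
    }

    // The type-constraint variant additionally accepts the "any" keyword:
    cExpr, _ := hclsyntax.ParseExpression([]byte("any"), "types.hcl", hcl.Pos{Line: 1, Column: 1})
    constraint, _ := typeexpr.TypeConstraint(cExpr) // cty.DynamicPseudoType
    _ = constraint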
Implementing the config loader for Terraform led to the addition of some
special static analysis operations for expressions, separate from the
usual action of evaluating an expression to produce a value.
These operations are useful for building application-specific language
constructs within HCL syntax, and so they are now included as part of the
specification in order to help developers of other applications understand
their behaviors and the implications of using them.
This accompanies ExprList, ExprMap, and AbsTraversalForExpr to
complete the set of static analysis interfaces for digging down into the
expression syntax structures without evaluation.
The intent of this function is to be a little like AbsTraversalForExpr
but for function calls. However, it's also similar to ExprList in that
it gives access to the raw expression objects for the arguments, allowing
for recursive analysis.
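A sketch of how this might look in use (assuming hclsyntax.ParseExpression
for the input; the result's field names follow the description above and
are assumptions):

    expr, _ := hclsyntax.ParseExpression(
        []byte(`format("hello %s", name)`), "f.hcl", hcl.Pos{Line: 1, Column: 1},
    )
    call, diags := hcl.ExprCall(expr)
    if !diags.HasErrors() {
        fmt.Println(call.Name)           // "format"
        fmt.Println(len(call.Arguments)) // 2: raw argument expressions,
                                         // available for recursive analysis
    }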
We recognize and allow naked $ and % sequences by reading ahead one more
character to see if it's a "{" that would introduce an interpolation or
control sequence.
Unfortunately this is problematic in the end condition because it can
"eat" the terminating character and cause the scanner to continue parsing
a template when the user intended the template to end.
Handling this is a bit messy. For the quoted and heredoc situations we
can use Ragel's fhold statement to "backtrack" to before the character
we consumed, which does the trick. For bare templates this is insufficient
because there _is_ no following character and so the scanner detects this
as an error.
Rather than adding even more complexity to the state machine, we instead
handle invalid bytes at the top level of a bare template as a special
case, returning them as a TokenStringLit instead of a TokenInvalid. This
gives the parser what it needs.
The fhold approach causes some odd behavior where an escaped template
introducer character causes a token split and two tokens are emitted
instead of one. This is weird but harmless, since we'll ultimately just
concatenate all of these strings together anyway, and so we allow it,
again to avoid making the scanner more complex when it's easy enough to
handle this in the parser, where we have more context.
This was allowed in legacy HCL, and although the Terraform documentation
never described it as usable, it appears that some Terraform
configurations use this form anyway.
While it is non-ideal to have another edge case to support and maintain,
this capability adds no ambiguity and doesn't add significant complexity,
so we'll allow it as a pragmatic accommodation of existing usage.
Terraform allowed indexing like foo.0.bar to work around HIL limitations,
and so we'll permit that as a pragmatic way to accept existing Terraform
configurations.
However, we can't support this fully because our parser thinks that
chained number indexes, like foo.0.0.bar, are single numbers. Since that
usage in Terraform is very rare (there are very few lists of lists) we
will mark that situation as an error with a helpful message suggesting
the modern index syntax (e.g. foo[0].bar) instead.
This also turned up a similar bug in the existing legacy index handling
we were doing for splat expressions, which is now handled in the same
way.
We are leaning on the Unicode identifier definitions here, but the
specified ID_Start set does not include the underscore character, and
users seem to expect this to be allowed due to experience with other
languages.
Since allowing a leading underscore introduces no ambiguity, we'll allow
it. Calling applications may choose to reject it if they'd rather not have
such weird names.
Previously we missed the '%' character in our "SelfToken" production,
which meant that the modulo operator could not parse properly because it
was represented as a TokenInvalid.
Due to some earlier limitations of the parser we required each attribute
and block to end with a newline, even if it appeared at the end of a
file. In effect, this required all files to end with a newline character.
This is no longer required and so we'll tolerate that missing newline for
pragmatic reasons.
Elsewhere we are using 512-bit precision as the standard for converting
from a string to a number, since the default precision is lower. This is
just to
unify JSON parsing with the native syntax processing and the automatic
type conversions in the language, so we don't see different precision
behaviors depending on syntax.
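As a minimal sketch of the kind of conversion being standardized on
(assuming math/big is used directly; the real call sites may differ):

    // Parse a decimal string at 512-bit precision, rather than relying
    // on math/big's lower default precision.
    f, _, err := big.ParseFloat("3.14159265358979323846", 10, 512, big.ToNearestEven)
    if err != nil {
        // handle the malformed number
    }
    _ = f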
big.Float is not DeepEqual-friendly because it contains a precision value
that can make two numerically-equal values appear as non-equal.
Since the number decoding isn't the point of these tests, we instead
swap in cty.Bool values, which _are_ compatible with reflect.DeepEqual
since they are just wrappers around the native bool type.
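For illustration, a small standalone sketch (using only math/big, fmt,
and reflect) of why numerically-equal big.Float values can still compare
unequal under reflect.DeepEqual:

    a := new(big.Float).SetPrec(64).SetInt64(1)
    b := new(big.Float).SetPrec(512).SetInt64(1)
    fmt.Println(a.Cmp(b) == 0)           // true: numerically equal
    fmt.Println(reflect.DeepEqual(a, b)) // false: internal precision fields differ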
The contract for AbsTraversalForExpr calls for us to interpret an
expression as if it were traversal syntax. Traversal syntax does not have
the special keywords "null", "true" and "false", so we must interpret
these as TraverseRoot rather than as literal values.
Previously this wasn't working because the parser converted these to
literals too early. To make this work properly, we implement
AbsTraversalForExpr on literal expressions and effectively "undo" the
parser's re-interpretation of these keywords to back out to the original
keyword strings.
We also rework how object keys are handled so that we wait until eval time
to decide whether to interpret the key expression as an unquoted literal
string. This allows us to properly support AbsTraversalForExpr on keys
in object constructors, bypassing the string-interpretation behavior in
that case.
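A sketch of the resulting behavior (assuming hclsyntax.ParseExpression
for the input):

    expr, _ := hclsyntax.ParseExpression([]byte("null"), "t.hcl", hcl.Pos{Line: 1, Column: 1})
    trav, diags := hcl.AbsTraversalForExpr(expr)
    if !diags.HasErrors() {
        // "null" is interpreted as a root symbol name rather than as the
        // null literal the evaluator would otherwise see.
        fmt.Println(trav.RootName()) // "null"
    }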