Conditional expression parsing is ostensibly implemented, but since it
depends on the rest of the expression parsers -- not yet implemented --
it cannot be tested in isolation.
Also includes an initial implementation of the conditional expression
node, but this is also not yet tested and so may need further revisions
once we're in a better position to test it.
This rewrite of decodeQuotedLit, now called decodeStringLit, is able to
handle both cases with a single function, and also now correctly handles
situations where double-$ and double-! are not followed immediately by
a { symbol, and must thus be treated literally.
The context where a string literal was found affects what sort of escaping
it can have, so we need to distinguish these cases so that we will only
look for and handle backslash escapes in quoted strings.
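As a rough sketch of the escaping rule described here (the function and
package names below are hypothetical, and backslash handling in quoted
strings is omitted):

```go
package stringlit

import "strings"

// decodeTemplateEscapes illustrates the rule described above: "$${" and
// "!!{" collapse to a literal "${" and "!{", while a doubled "$" or "!"
// that is NOT followed by "{" is passed through unchanged.
func decodeTemplateEscapes(s string) string {
	var buf strings.Builder
	for i := 0; i < len(s); i++ {
		c := s[i]
		if (c == '$' || c == '!') && i+2 < len(s) && s[i+1] == c && s[i+2] == '{' {
			buf.WriteByte(c) // emit a single introducer character
			buf.WriteByte('{')
			i += 2
			continue
		}
		buf.WriteByte(c)
	}
	return buf.String()
}
```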
This recovery method attempts to place the peeker directly after the
newline indicating the end of the current body item. It does this by
counting open and close bracketing constructs and then returning when
a newline is encountered with no bracketing constructs open.
It's designed for use in the "header" part of a body item, with no
bracketing constructs open yet. It _might_ work in other situations, but
is likely to end up choosing the wrong end point if used in the middle
of a bracketed expression that itself contains newlines.
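In outline, the recovery loop might look like this self-contained sketch
(the token and peeker types here are stand-ins, not the real scanner's):

```go
package parser

// Stand-in types for illustration only.
type tokenType int

const (
	tokenNewline tokenType = iota
	tokenOBrace
	tokenCBrace
	tokenOBrack
	tokenCBrack
	tokenOParen
	tokenCParen
	tokenEOF
)

type token struct{ Type tokenType }

type peeker interface{ Read() token }

// recoverAfterBodyItem skips tokens, tracking bracket nesting, and stops
// just after a newline seen with no bracketing constructs open.
func recoverAfterBodyItem(p peeker) {
	open := 0
	for {
		tok := p.Read()
		switch tok.Type {
		case tokenEOF:
			return
		case tokenOBrace, tokenOBrack, tokenOParen:
			open++
		case tokenCBrace, tokenCBrack, tokenCParen:
			if open > 0 {
				open--
			}
		case tokenNewline:
			if open == 0 {
				return
			}
		}
	}
}
```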
These are the top-level entry points for parsing configuration files in the
native zcl syntax, similar to the existing methods for parsing other
syntaxes.
The parser has some recovery heuristics but they will not always achieve
the best result. To prevent unsuccessful recovery from causing a cascade
of confusing follow-on errors, we'll track in the parser whether recovery
has been attempted and then the specific sub-parsers can use this to
skip certain "unexpected token type" errors in recovery situations,
assuming instead that an earlier error will cover the problem and that
we just want to bail out quickly.
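A minimal sketch of that bookkeeping, with illustrative names:

```go
package parser

// Illustrative stand-ins for the real diagnostic types.
type diagnostic struct{ Summary string }
type diagnostics []*diagnostic

// parser records whether error recovery has already been attempted, so
// that sub-parsers can suppress follow-on "unexpected token type" errors.
type parser struct {
	recovery bool
}

// unexpectedToken reports an error only when we're not already recovering
// from an earlier one; otherwise it bails out quietly, assuming the
// earlier error covers the problem.
func (p *parser) unexpectedToken(summary string) diagnostics {
	if p.recovery {
		return nil
	}
	return diagnostics{{Summary: summary}}
}
```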
The scanner produces complicated sequences for quoted strings due to the
template language, but sometimes we just want a simple string with no
interpolations.
This will make it easier for the parser to walk through the token sequence
with one token of lookahead, maintaining state of where we've got to and
also allowing us to switch into and out of the newline-sensitive mode
as we transition to and from newline-sensitive constructs.
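A simplified picture of such a peeker (field and method names are
illustrative):

```go
package parser

// token is an illustrative stand-in for the real scanner token type.
type token struct {
	Type rune
}

// peeker walks a pre-scanned token slice with one token of lookahead.
// Newline tokens are skipped or returned depending on whether the parser
// is currently inside a newline-sensitive construct.
type peeker struct {
	tokens          []token
	pos             int
	includeNewlines bool
}

// Peek returns the next significant token without consuming it.
func (p *peeker) Peek() token {
	i := p.pos
	for !p.includeNewlines && i < len(p.tokens)-1 && p.tokens[i].Type == '\n' {
		i++
	}
	return p.tokens[i]
}

// Read consumes and returns the next significant token.
func (p *peeker) Read() token {
	for !p.includeNewlines && p.pos < len(p.tokens)-1 && p.tokens[p.pos].Type == '\n' {
		p.pos++
	}
	tok := p.tokens[p.pos]
	if p.pos < len(p.tokens)-1 {
		p.pos++
	}
	return tok
}
```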
Mirroring the Lex... triplet of functions, callers here can choose to
parse in one of three modes:
- ParseConfig is the main entry point and parses what we might consider
a whole config file.
- ParseExpression parses a sequence as an isolated expression, which
may be useful in implementing a REPL to inspect configuration or
application state.
- ParseTemplate parses a sequence as an isolated template, which may be
useful in parsing external files as templates, for situations where the
input is too large to be conveniently embedded within configuration.
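A rough usage sketch for the main entry point; the import paths and the
exact ParseConfig signature (source bytes, filename, start position) are
assumptions mirroring the Lex... functions:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"

	"github.com/zclconf/go-zcl/zcl"           // assumed import path
	"github.com/zclconf/go-zcl/zcl/zclsyntax" // assumed import path
)

func main() {
	src, err := ioutil.ReadFile("example.zcl")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Assumed signature: source bytes, a filename for diagnostics, and
	// the position at which the source begins.
	file, diags := zclsyntax.ParseConfig(src, "example.zcl", zcl.Pos{Line: 1, Column: 1})
	for _, diag := range diags {
		fmt.Fprintln(os.Stderr, diag)
	}

	_ = file // the returned file's Body can now be decoded by the caller
}
```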
In zclwrite we throw away the absolute source position information and
instead just retain the number of spaces before each token. This different
model allows us to rewrite parts of the token sequence without needing
to re-adjust all of the positions, and it also allows us to do simple
indentation and spacing adjustments just by walking through the token
list and adjusting these numbers.
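In sketch form (field names are assumptions), the model looks like:

```go
package zclwrite

// Token carries raw bytes plus only relative spacing, rather than any
// absolute source position.
type Token struct {
	Type         rune   // token type (illustrative; the real type differs)
	Bytes        []byte // raw source bytes of the token
	SpacesBefore int    // number of spaces preceding this token
}

// indentBy shows why this model is convenient: adjusting indentation is
// just a walk over the token list bumping SpacesBefore at line starts,
// with no positions to re-adjust.
func indentBy(tokens []*Token, n int) {
	startOfLine := true
	for _, tok := range tokens {
		if startOfLine {
			tok.SpacesBefore += n
		}
		startOfLine = len(tok.Bytes) > 0 && tok.Bytes[len(tok.Bytes)-1] == '\n'
	}
}
```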
This LexConfig, LexExpression and LexTemplate set of functions allows
outside callers to use the scanner in isolation, skipping the parser.
This may be useful for use-cases such as syntax highlighting, separate
parsers (such as the one in zclwrite), and so forth. Most callers should
use the parser (once implemented) though, to get a semantic AST.
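For example, a syntax highlighter might use only the scanner, along these
lines (the import paths and the token fields Type/Bytes are assumptions):

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-zcl/zcl"           // assumed import path
	"github.com/zclconf/go-zcl/zcl/zclsyntax" // assumed import path
)

func main() {
	src := []byte("name = \"example\"\n")

	// Assumed signature, mirroring the parser entry points.
	tokens, _ := zclsyntax.LexConfig(src, "inline.zcl", zcl.Pos{Line: 1, Column: 1})

	for _, tok := range tokens {
		// A highlighter can colorize based on token type alone, without
		// the semantic AST the parser would produce.
		fmt.Printf("%v %q\n", tok.Type, tok.Bytes)
	}
}
```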
This alternative scanning mode makes the scanner start in template
context rather than normal context. This will be later used by the parser
to allow parsing of standalone templates that aren't embedded inside a
zcl configuration file.
This is important because our syntax for objects uses newlines as the
separator between items, so this is the only signal we'll get that a
given item has ended and another is beginning.
A scanner "mode" decides which state it starts in, allowing us to start
in template mode for parsing top-level templates. However, currently the
only mode implemented is "normal" mode, which is the behavior we had
before.
This requires some extra state-keeping because we allow templates to be
nested inside templates. This takes us outside of the world of regular
languages, but we accept that here because it makes things easier to
deal with down the line in the parser.
The methodology is to keep track of how many braces are open at a given
time and then, when a nested template interpolation begins, record the
current brace level. Then, when a closing brace is encountered, if its
nesting level is at the top of the stack then we pop off the stack and
return to "main" parsing mode.
Ragel's existing idea of calling and returning from machines is important
here too. As things currently stand this is not actually needed, but once
heredocs are in play we will have two possible places to return to at
the end of an interpolation sequence, so the state return stack maintained
by Ragel will determine whether to return to string mode or heredoc mode.
Although this end symbol appears as just a close-brace in source, it's
worth differentiating it because the scanner must differentiate it anyway
(to recognize moving back into template-scanning mode) and it saves the
parser from having to re-recognize the difference itself.
On reflection, it seems easier to maintain the necessary state we need
by doing all of the scanning in a single pass, since we can then just
use local variables within the scanner function.
Using Ragel here because the scanner is going to be somewhat complex due
to the need to switch back and forth between normal and template states,
etc. This should be easier to maintain than a hand-written scanner, while
Ragel gives us the extra features we need to implement things that would
normally be too complex for a "regular" scanner generator.
This means we can actually point at a column in the console without it
getting misaligned by multi-byte UTF-8 sequences and Unicode combining
characters.
This is the first non-trivial expression Value implementation. There's a
lot of code here, so hopefully opportunities to factor out some of these
details will emerge while implementing other expressions.
The implementation of Variables will be identical for every Expression
implementation since we just wrap our AST-walk-based "Variables" function
to do the work.
Rather than manually copy-pasting the declaration for each expression
type, instead we'll generate this programmatically using "go generate".
This will need to be re-run each time a new expression node type is
added, in order to make it actually implement the Expression interface.
This function is effectively the implementation of Variables for all
expressions, but unfortunately we still need to declare a wrapper around
it as a method on every single expression type.
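Each generated wrapper might look roughly like the following; the node
type names and the []zcl.Traversal return type are assumptions based on
the descriptions here:

```go
package zclsyntax

// Illustrative examples of the generated per-type wrappers around the
// shared AST-walking Variables function.

func (e *ConditionalExpr) Variables() []zcl.Traversal {
	return Variables(e)
}

func (e *LiteralValueExpr) Variables() []zcl.Traversal {
	return Variables(e)
}
```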
This package will grow to contain all of the gory details of the native
zcl syntax, including its AST, parser, etc. Most callers should access
this via the simpler API in the top-level package, which then gives
automatic support for other syntaxes too.
Traversals are a generalized way to talk about paths taken from the scope
and from arbitrary values. These will be used for various analysis tasks,
such as determining what needs to be placed into a scope.
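One way to picture the idea (this is a sketch of the concept, not a
confirmed API, and the cty import path is also an assumption):

```go
package zcl

import "github.com/zclconf/go-cty/cty" // assumed import path

// A Traversal is a path from a root name (or from some arbitrary value)
// through a sequence of attribute and index steps.
type Traversal []Traverser

type Traverser interface {
	isTraverser()
}

// TraverseRoot starts a traversal at a name in the scope, e.g. "foo".
type TraverseRoot struct{ Name string }

// TraverseAttr selects an attribute of the value so far, e.g. ".bar".
type TraverseAttr struct{ Name string }

// TraverseIndex indexes into the value so far, e.g. ["baz"] or [0].
type TraverseIndex struct{ Key cty.Value }

func (TraverseRoot) isTraverser()  {}
func (TraverseAttr) isTraverser()  {}
func (TraverseIndex) isTraverser() {}
```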
Eventually zcl will have its own native template format that we'll use
by default, but for applications migrating from HCL/HIL we can instead
parse strings as HIL templates, for compatibility with how JSON configs
would've been processed in an HCL/HIL app.
When this mode is not enabled, we still just treat strings as literals,
pending the implementation of the zcl template parser.
Currently only deals with the literal HCL structure. Later we will also
support HIL parsing and evaluation within strings, to achieve parity with
existing uses of HCL/HIL together. For now, this has parity with uses of
HCL alone, with the exception that float and int values are not
distinguished, because cty does not make this distinction.
This is intended as a backward-compatibility interface, allowing
applications that previously used HCL/HIL to adopt zcl while still being
able to parse their old HCL/HIL-based configuration file formats.
When producing diagnostics about missing attributes or bodies it's
necessary to have a range representing the place where the missing thing
might be inserted.
There's not always a single reasonable value for this, so some liberty
can be taken about what exactly is returned as long as it's somewhere
the user can relate back to the construct producing the error.
Expressions can now be evaluated within an "EvalContext", which provides
the variable and function scopes. The JSON implementation of this
currently ignores the context entirely and just returns literal values,
since we've not yet implemented the template language parser that would
be needed for the JSON parser to properly support expressions.
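In sketch form, evaluation might be driven like this (the field and method
shapes, plus the import paths, are assumptions):

```go
package example

import (
	"github.com/zclconf/go-cty/cty" // assumed import path
	"github.com/zclconf/go-zcl/zcl" // assumed import path
)

// evaluate shows the assumed shape: the caller builds an EvalContext
// carrying variable (and optionally function) scopes and passes it to
// the expression's Value method.
func evaluate(expr zcl.Expression) (cty.Value, zcl.Diagnostics) {
	ctx := &zcl.EvalContext{
		Variables: map[string]cty.Value{
			"name": cty.StringVal("world"),
		},
	}
	return expr.Value(ctx)
}
```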
The Content and PartialContent methods deal with the case where the caller
knows what structure is expected within the body, but sometimes the
structure of a body is just a free-form set of attributes that the caller
needs to enumerate.
The idea here is that the block in question must contain only attributes,
and no child blocks. For JSON this just entails interpreting every
property as an attribute. For native syntax later this will mean
producing an error diagnostic if any blocks appear within the body.
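A possible consumption pattern for such a free-form body, assuming a
JustAttributes-style method that returns the full attribute map plus
diagnostics (the method name, signature, and import path are assumptions):

```go
package example

import (
	"fmt"

	"github.com/zclconf/go-zcl/zcl" // assumed import path
)

// dumpAttributes enumerates every attribute in a free-form body and
// evaluates each one with a nil context (literal values only).
func dumpAttributes(body zcl.Body) zcl.Diagnostics {
	attrs, diags := body.JustAttributes()
	for name, attr := range attrs {
		val, valDiags := attr.Expr.Value(nil)
		diags = append(diags, valDiags...)
		fmt.Printf("%s = %#v\n", name, val)
	}
	return diags
}
```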
When using the parser to do static analysis and editor integrations, it's
still useful to be able to get the incomplete AST resulting from a parse
error.
Callers that intend to use the returned value to take real actions (as
opposed to just analysis) must check diags.HasError() to determine if
the returned file can be considered valid.
Its implementation calls into each of the child bodies in turn and merges
the results to produce a single BodyContent. This is intended to support
the case of a directory being the unit of configuration rather than a
file, with the calling application discovering and parsing each of the
files in its workspace and then merging them together for processing as
a single configuration.
We need to be careful to keep straight the distinction between JSON
properties and zcl body attributes here, since properties can represent
both attributes _and_ blocks.
This is a wrapper around Body.PartialContent that generates additional
error diagnostics if any object properties are left over after decoding,
helping the config author to catch typos that would otherwise have caused
a property to be silently ignored.
Even if errors were encountered during parsing, it is helpful to still
return a partial AST if possible since that allows for the source code
analysis features to still (partially) work in the face of errors.
The new "Nav" member on a zcl.File is an opaque object that can be
populated by parsers with an object that supports certain interfaces
that are not part of the main API but are useful for integration with
editors and other tooling.
As a first example of this, we replace the hack for getting context in
the diagnostic package with a new ContextString interface, which can
then be optionally implemented by a given parser to return a contextual
string native to the source language.
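The shape of that optional interface might be roughly as follows; the
exact method signature is an assumption and the package name is
hypothetical:

```go
package zcled

// contextStringer is a sketch of the optional interface: a parser can
// populate zcl.File.Nav with a value implementing it, and tooling can
// then ask for a contextual label (for example, the chain of blocks
// containing a byte offset) expressed in the source language's own terms.
type contextStringer interface {
	ContextString(offset int) string
}

// ContextString returns a contextual string for the given offset if the
// file's parser provided suitable navigation metadata, or "" otherwise.
func ContextString(nav interface{}, offset int) string {
	if cs, ok := nav.(contextStringer); ok {
		return cs.ContextString(offset)
	}
	return ""
}
```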
The indication of specific characters in the source code that are in
error is not yet implemented, but this gets at the main functionality
of printing diagnostics.
Some applications treat an entire directory as a configuration, merging
the configurations from all of the files in a directory and treating them
as one.
MergeFiles supports this idea by wrapping the bodies of the several files.
It's not yet implemented here, but once implemented it will act as an
aggregator of the content of the wrapped bodies, delegating to them for
actual body content and then merging the returned body content in a
well-defined way.
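A sketch of the directory-as-configuration workflow this enables;
MergeFiles returning a single merged Body is taken from the description
above, while the import path and the parse callback are assumptions:

```go
package example

import (
	"io/ioutil"
	"path/filepath"

	"github.com/zclconf/go-zcl/zcl" // assumed import path
)

// loadDir parses every .zcl file in a directory and merges the results so
// the caller can treat the whole directory as one configuration. The
// parseFile callback stands in for whichever parse entry point is used.
func loadDir(dir string, parseFile func(src []byte, filename string) (*zcl.File, zcl.Diagnostics)) (zcl.Body, zcl.Diagnostics) {
	var diags zcl.Diagnostics
	var files []*zcl.File

	paths, _ := filepath.Glob(filepath.Join(dir, "*.zcl"))
	for _, path := range paths {
		src, err := ioutil.ReadFile(path)
		if err != nil {
			continue // a real implementation would report this as a diagnostic
		}
		f, fDiags := parseFile(src, path)
		diags = append(diags, fDiags...)
		files = append(files, f)
	}

	// The merged body delegates to each file's body in turn.
	return zcl.MergeFiles(files), diags
}
```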
The term "element" is already used for an item from a collection in cty,
so we'll use "block" to talk about the nested blocks in a zcl config to
reduce confusion.