We consume both comments and newlines primarily so that when we manipulate
the AST they will all stay together. For example, removing an attribute
from a body should take its comments and newline too.
When we're skipping comments but retaining newlines, we need some
sleight of hand, because single-line comment tokens contain the newline
that terminates them (for simpler handling of lead doc comments) but our
parsing can be newline-sensitive.
To allow for this, as a special case we transform single-line comment
tokens into newlines when in this situation, thus allowing parser code
to just worry about the newlines and skip over the comments.
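As a minimal sketch of that special case (the token and peeker shapes
here are illustrative, not the exact implementation):

    type TokenType rune

    const (
        TokenComment TokenType = 'C'
        TokenNewline TokenType = '\n'
    )

    type Token struct {
        Type  TokenType
        Bytes []byte
    }

    type peeker struct {
        toks            []Token
        pos             int
        includeNewlines bool
    }

    func (p *peeker) next() Token {
        tok := p.toks[p.pos]
        p.pos++
        return tok
    }

    // nextSignificant returns the next token the parser should see,
    // skipping comments but converting single-line comments into
    // newline tokens when newlines are significant.
    func (p *peeker) nextSignificant() Token {
        tok := p.next()
        for tok.Type == TokenComment {
            if n := len(tok.Bytes); p.includeNewlines && n > 0 && tok.Bytes[n-1] == '\n' {
                // A single-line comment carries its terminating newline,
                // so rewrite it as a newline token rather than lose it.
                tok.Type = TokenNewline
                tok.Bytes = tok.Bytes[n-1:]
                return tok
            }
            tok = p.next() // otherwise skip the comment entirely
        }
        return tok
    }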
The native parser's ranges don't include any surrounding comments, so we
need to do a little more work to pick them out of the surrounding token
sequences.
This just takes care of _lead_ comments, which are those that appear as
whole-line comments above the item in question. Line comments, which
appear after the item on the same line, will follow in a later commit.
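A sketch of that backwards scan, using the same illustrative Token type
as above and assuming we already know the index of the item's first
token:

    // leadComments walks backwards from the token at startIdx,
    // collecting the contiguous run of comment tokens above the item.
    // (For brevity this skips excluding a comment that is really a
    // trailing line comment for some earlier item on the same line,
    // which the real scan must handle.)
    func leadComments(toks []Token, startIdx int) []Token {
        first := startIdx
        for first > 0 && toks[first-1].Type == TokenComment {
            first--
        }
        return toks[first:startIdx]
    }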
This is mainly for symmetry with the API of zclsyntax, but later we'll
probably also add a ParseExpression function that can be used to make
edits to individual expressions.
This is not yet complete, since it fails to capture the newline, line
comments, and variable references in expressions. However, it does
capture the broad structure of an attribute, along with gathering up
all of its _interior_ tokens.
We'll use TokenSeq for everything, including sequences we expect to
contain only one token, mainly for consistency but also so that our local
"parser" doesn't need to care about whether the main parser is returning
one token or many.
This now allows for round-tripping some input from bytes to tokens and
then back to bytes again. With no other changes, we expect this to produce
an identical result.
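That is, for any valid input we expect the following property to hold
(ParseConfig and WriteTo are placeholders for whatever the final API
names turn out to be):

    src := []byte("foo = \"bar\"\n")
    f, diags := ParseConfig(src, "test.zcl", zcl.Pos{Line: 1, Column: 1})
    if diags.HasErrors() {
        // handle parse errors
    }
    var buf bytes.Buffer
    f.WriteTo(&buf) // with no edits made, buf.Bytes() must equal src exactly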
Although building a flat array of tokens is _one_ use-case for TokenGen,
this new approach means we can also use the interface to write to an
io.Writer without needing to produce an intermediate buffer.
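One plausible shape for that, sketched rather than exact (reusing the
illustrative Token type from above, plus the standard "io" package):

    // TokenGen is anything that can produce a sequence of tokens.
    type TokenGen interface {
        // EachToken calls cb once per token, in order.
        EachToken(cb func(Token))
    }

    // Callers can collect the tokens into a flat array...
    func flatTokens(gen TokenGen) []Token {
        var toks []Token
        gen.EachToken(func(tok Token) { toks = append(toks, tok) })
        return toks
    }

    // ...or stream their bytes straight to an io.Writer, with no
    // intermediate buffer.
    func writeTokens(w io.Writer, gen TokenGen) error {
        var err error
        gen.EachToken(func(tok Token) {
            if err == nil {
                _, err = w.Write(tok.Bytes)
            }
        })
        return err
    }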
Previously we were allocating a separate heap object for each token, which
creates a lot of small objects for the GC to manage. Since we know that
we're always converting from a flat array of native tokens, we can produce
a flat array of writer tokens first and _then_ take pointers into that
array to achieve our goal of making a slice of pointers.
For the use-case of formatting a sequence of tokens by tweaking the
"SpacesBefore" value, this means we can get all of our memory allocation
done in a single chunk and then just tweak the allocated, contiguous
tokens in-place, which should reduce memory pressure for a task which
will likely be done frequently by a text editor integration doing "format
on save".
The "writer parser" is a parser that produces a writer AST rather than
a zclsyntax AST. This can be used to produce a writer AST from existing
source in order to modify it before writing it out again.
It's implemented with the somewhat-unintuitive approach of running the
main zclsyntax parser and then mapping the source ranges it finds back
onto the token sequence to pull out the raw tokens for each object.
This allows us to avoid maintaining two parsers but also keeps all of
this raw-token-wrangling complexity out of the main parser.
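The core of that mapping is a search like the following, assuming each
raw token records its source range (names illustrative):

    // tokensInRange returns the subsequence of toks whose ranges fall
    // entirely within rng, comparing byte offsets. The writer "parser"
    // applies this to each source range the native parser reports in
    // order to recover the raw tokens for the corresponding object.
    func tokensInRange(toks []Token, rng zcl.Range) []Token {
        start := 0
        for start < len(toks) && toks[start].Range.Start.Byte < rng.Start.Byte {
            start++
        }
        end := start
        for end < len(toks) && toks[end].Range.End.Byte <= rng.End.Byte {
            end++
        }
        return toks[start:end]
    }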
Indexing is pretty fundamental and it's also non-trivial, so having this
exposed will make it easier for it to be implemented consistently across
many different callers, including within calling applications.
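For illustration, a hypothetical call site (not necessarily the exact
signature):

    // Index applies the index operation with full diagnostics support:
    // it must handle lists, maps, tuples, objects, and unknown values,
    // convert the key type where possible, and blame srcRange on failure.
    elemVal, diags := zcl.Index(collectionVal, cty.NumberIntVal(0), &srcRange)
    if diags.HasErrors() {
        // the caller appends these to its own diagnostics
    }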
This can either be a traversal or a first-class node depending on whether
the given expression is a literal. This exception is made to allow
applications to conditionally populate only part of a potentially-large
collection if the config is only requesting one or two distinct indices.
In particular, it allows the following to be considered a single traversal
from the scope:
    foo.bar[0].baz
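Decomposed, that is a single absolute traversal of four steps, along
these lines (step types shown illustratively):

    traversal := zcl.Traversal{
        zcl.TraverseRoot{Name: "foo"},
        zcl.TraverseAttr{Name: "bar"},
        zcl.TraverseIndex{Key: cty.NumberIntVal(0)}, // literal index: stays in the traversal
        zcl.TraverseAttr{Name: "baz"},
    }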
stringer is more sensitive to certain errors than other generators, so
by running it last we give the other generators a chance to get things
straight before we ask stringer to run.
Traversal operators are the operators that can appear after a value
to traverse into the data structure that value represents. So far only
the attribute access operator is implemented.
Previously it was returning DynamicVal, but that's incorrect: it would
mean that even an otherwise-complete result with an unpopulated
optional attribute would contain an unknown value.
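The distinction matters in cty terms: a typed null is itself a known,
complete value, while DynamicVal is unknown, and unknownness propagates:

    cty.NullVal(cty.String).IsKnown() // true: null, but the result is complete
    cty.DynamicVal.IsKnown()          // false: unknown, and any result built
                                      // from it becomes unknown too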
This allows tooling to get a string describing the context of a particular
offset into the file. This is used, for example, to provide context
above the source code snippets in console-printed diagnostic messages.
It no longer produces diagnostics, since that's redundant with the
diagnostics that Decode itself will produce, and it wasn't going to be
complete anyway due to our use of partial decoding and our inability
to thread through nested specs in child blocks.
This allows callers to determine the source location where a particular
value (identified by a spec) came from, for use in application-level
diagnostic messages.
There are certain tokens that are _never_ valid, so we might as well
catch them early in the Lex... functions rather than having to handle
them in many different contexts within the parser.
Unfortunately, when such errors occur they tend to be echoed by more
confusing errors coming from the parser, but we'll accept that for now.
This package is an alternative to gocty for situations where static Go
types are not desired and the application instead wishes to remain in the
cty dynamic type system.
This allows code that only deals with BodyContent (post-decoding) to
still be able to report on missing items within the associated body while
providing a suitable source location.