Previously operations were an enum, but we've ended up needing to store
a collection of values against each, so an operation being a pointer to
a struct feels more natural.
This in turn allows us to more easily fix the return types of the
operations, so that we don't need to do any unusual work to understand
that (for example) arithmetic always returns a number.
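
As a rough sketch of the shape this enables (the field names and the
cty import path here are illustrative assumptions, not the actual
zclsyntax definitions):

    package zclsyntax

    import "github.com/zclconf/go-cty/cty"

    // Operation is now a struct we can hang extra values off, rather
    // than a bare enum constant.
    type Operation struct {
        Impl func(args []cty.Value) (cty.Value, error)
        Type cty.Type // fixed result type of the operation
    }

    // An arithmetic operation always yields cty.Number, so a type
    // checker can just read op.Type instead of special-casing.
    var OpAdd = &Operation{
        Impl: func(args []cty.Value) (cty.Value, error) {
            return args[0].Add(args[1]), nil
        },
        Type: cty.Number,
    }
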
Previously we were detecting the exactly-one-part case at parse time and
skipping the TemplateExpr node in the AST, but this was problematic
because we then misaligned the source range for the expression to the
_content_ of the quotes, rather than including the quotes themselves.
As well as producing confusing diagnostics, this also caused problems for
zclwrite since it relies on source ranges to map our AST back onto the
source tokens it came from.
The existing round-trip test verified that we can pass a set of tokens
through the AST entirely unmodified.
This new round-trip test passes the source through the Format function
and then checks that it has the same semantics, by evaluating it both
before and after and expecting an identical set of values.
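
The shape of that test, sketched with placeholder helpers (format and
eval are stand-ins for the real zclwrite and zclsyntax entry points,
not actual API):

    package zclwrite_test

    import (
        "reflect"
        "testing"
    )

    // format and eval are placeholders for this sketch: format is the
    // formatter under test, and eval parses and evaluates source into
    // a map of resulting values.
    var (
        format func(src []byte) []byte
        eval   func(src []byte) (map[string]interface{}, error)
    )

    func TestFormatRoundTrip(t *testing.T) {
        src := []byte("a = 1 + 2\n")

        before, err := eval(src)
        if err != nil {
            t.Fatalf("source fails to evaluate before formatting: %s", err)
        }

        after, err := eval(format(src))
        if err != nil {
            t.Fatalf("source fails to evaluate after formatting: %s", err)
        }

        if !reflect.DeepEqual(after, before) {
            t.Errorf("formatting changed the result\nbefore: %#v\nafter: %#v", before, after)
        }
    }
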
This tweaks the number of spaces before each token to create a consistent
layout. It doesn't (yet?) attempt to add or remove line breaks or
otherwise mess with the non-space characters in the code.
The main goal here is to get things to line up. zcl syntax is simple
enough that there's not much latitude for super-weird usage, and so if
someone does manage to write something weird (asymmetric breaking of
brackets between lines, etc) we won't try to fix it here.
Our approach here is to split the input into lines and then to split
each line into up to three cells which, when present, should have their
leftmost characters aligned (eventually) by the formatter.
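
The alignment idea, sketched on plain strings rather than the real
token sequences (cell boundaries are taken as given here; the real
formatter decides them lexically and adjusts SpacesBefore on tokens):

    package main

    import (
        "fmt"
        "strings"
    )

    // alignCells pads every cell except the last on each line to the
    // widest occupant of that cell position, so the leftmost character
    // of each cell lines up down the column. At most three cells per
    // line, per the model described above.
    func alignCells(lines [][]string) []string {
        var widths [3]int
        for _, cells := range lines {
            for i, c := range cells {
                if len(c) > widths[i] {
                    widths[i] = len(c)
                }
            }
        }
        out := make([]string, len(lines))
        for li, cells := range lines {
            parts := make([]string, len(cells))
            for i, c := range cells {
                if i < len(cells)-1 {
                    c += strings.Repeat(" ", widths[i]-len(c))
                }
                parts[i] = c
            }
            out[li] = strings.Join(parts, " ")
        }
        return out
    }

    func main() {
        for _, l := range alignCells([][]string{
            {"name", "=", `"foo"`},
            {"longer_name", "=", "1"},
        }) {
            fmt.Println(l)
        }
        // name        = "foo"
        // longer_name = 1
    }
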
Previously we tolerated EOF as an alias for newline, but a file without
a newline at the end is an edge case primarily limited to contrived
examples in unit tests, and requiring it simplifies tasks such as code
generation in zclwrite, since we can always assume that every block item
comes with a built-in line terminator.
We consume both of these things primarily so that when we manipulate
the AST they will all stay together. For example, removing an attribute
from a body should take its comments and newline too.
When we're skipping comments but retaining newlines, we need to do some
sleight-of-hand because single-line comment tokens contain the newline
that terminates them (for simpler handling of lead doc comments) but our
parsing can be newline-sensitive.
To allow for this, as a special case we transform single-line comment
tokens into newlines when in this situation, thus allowing parser code
to just worry about the newlines and skip over the comments.
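
That special case can be sketched like this, with simplified stand-in
token types (the real zclsyntax peeker differs in detail):

    package zclsyntax

    type TokenType rune

    const (
        TokenNewline TokenType = '\n'
        TokenComment TokenType = '#'
    )

    type Token struct {
        Type  TokenType
        Bytes []byte
    }

    // maskComment decides what a newline-sensitive parser should see
    // for a given token: a single-line comment carries its own
    // terminating newline, so it is presented as a bare newline token;
    // an inline /* ... */ comment yields nil and is skipped entirely.
    func maskComment(tok Token) *Token {
        if tok.Type != TokenComment {
            return &tok
        }
        if n := len(tok.Bytes); n > 0 && tok.Bytes[n-1] == '\n' {
            return &Token{Type: TokenNewline, Bytes: tok.Bytes[n-1:]}
        }
        return nil // no newline inside: just a skippable comment
    }
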
The native parser's ranges don't include any surrounding comments, so we
need to do a little more work to pick them out of the surrounding token
sequences.
This just takes care of _lead_ comments, which are those that appear as
whole line comments above the item in question. Line comments, which
appear after the item on the same line, will follow in a later commit.
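
The lead-comment rule amounts to a backward scan from the item's first
token; a sketch with simplified types (the real code also has to
confirm that each comment occupies a whole line of its own):

    package zclwrite

    type Token struct {
        Type  rune // '#' marks a comment token in this sketch
        Bytes []byte
    }

    // leadComments returns the contiguous run of comment tokens
    // sitting immediately before the token at index start. Because a
    // single-line comment token ends with its own newline, such a run
    // is exactly a block of whole-line comments above the item.
    func leadComments(toks []Token, start int) []Token {
        first := start
        for first > 0 && toks[first-1].Type == '#' {
            first--
        }
        return toks[first:start]
    }
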
This is mainly for symmetry with the API of zclsyntax, but also later
we'll probably have a ParseExpression function that can be used to make
edits to individual expressions.
This is not yet complete, since it fails to capture the newline, line
comments, and variable references in expressions. However, it does
capture the broad structure of an attribute, along with gathering up
all of its _interior_ tokens.
We'll use TokenSeq for everything, including sequences we expect to
contain only one token, mainly for consistency but also so that our local
"parser" doesn't need to care about whether the main parser is returning
one token or many.
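
One way to sketch that idea (an independent sketch; the names are
assumptions rather than the real zclwrite types):

    package zclwrite

    type Token struct {
        Bytes []byte
    }

    // TokenSeq is anything that can visit its tokens in order. Both a
    // single token and a list of sub-sequences implement it, so a
    // caller never cares how many tokens a production yielded.
    type TokenSeq interface {
        EachToken(fn func(Token))
    }

    type singleToken Token

    func (t singleToken) EachToken(fn func(Token)) { fn(Token(t)) }

    type tokenSeqs []TokenSeq

    func (s tokenSeqs) EachToken(fn func(Token)) {
        for _, sub := range s {
            sub.EachToken(fn)
        }
    }
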
This now allows for round-tripping some input from bytes to tokens and
then back to bytes again. With no other changes, we expect this to produce
an identical result.
Although building a flat array of tokens is _one_ use-case for TokenGen,
this new approach means we can also use the interface to write to an
io.Writer without needing to produce an intermediate buffer.
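
Reusing the sketch above (the interface described here is TokenGen,
but the shape is the same): a callback-style generator can stream
straight to an io.Writer with no intermediate slice.

    // (continuing the previous sketch; needs `import "io"`)
    // writeTokens streams a sequence directly to w. Error handling is
    // simplified, and the real code would also emit each token's
    // leading spaces.
    func writeTokens(seq TokenSeq, w io.Writer) error {
        var err error
        seq.EachToken(func(tok Token) {
            if err == nil {
                _, err = w.Write(tok.Bytes)
            }
        })
        return err
    }
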
Previously we were allocating a separate heap object for each token, which
creates a lot of small objects for the GC to manage. Since we know that
we're always converting from a flat array of native tokens, we can produce
a flat array of writer tokens first and _then_ take pointers into that
array to achieve our goal of making a slice of pointers.
For the use-case of formatting a sequence of tokens by tweaking the
"SpacesBefore" value, this means we can get all of our memory allocation
done in a single chunk and then just tweak the allocated, contiguous
tokens in-place, which should reduce memory pressure for a task which
will likely be done frequently by a text editor integration doing "format
on save".
The "writer parser" is a parser that produces a writer AST rather than
a zclsyntax AST. This can be used to produce a writer AST from existing
source in order to modify it before writing it out again.
It's implemented with the somewhat-unintuitive approach of running the
main zclsyntax parser and then mapping the source ranges it finds back
onto the token sequence to pull out the raw tokens for each object.
This allows us to avoid maintaining two parsers but also keeps all of
this raw-token-wrangling complexity out of the main parser.
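
The range-mapping trick can be sketched as binary searches over the
flat token list, assuming each token records its byte offsets in the
source (the real code works with zcl's richer range type):

    package zclwrite

    import "sort"

    type rangedToken struct {
        Start, End int // byte offsets into the original source
        Bytes      []byte
    }

    // tokensInRange finds the contiguous run of tokens whose start
    // offsets fall within [start, end), which is how a source range
    // reported by the zclsyntax parser maps back onto raw tokens.
    func tokensInRange(toks []rangedToken, start, end int) []rangedToken {
        first := sort.Search(len(toks), func(i int) bool {
            return toks[i].Start >= start
        })
        last := sort.Search(len(toks), func(i int) bool {
            return toks[i].Start >= end
        })
        return toks[first:last]
    }
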
Indexing is pretty fundamental and it's also non-trivial, so exposing
it here makes it easier to implement consistently across many different
callers, including within calling applications.
This can either be a traversal or a first-class node depending on whether
the given expression is a literal. This exception is made to allow
applications to conditionally populate only part of a potentially-large
collection if the config is only requesting one or two distinct indices.
In particular, it allows the following to be considered a single traversal
from the scope:
foo.bar[0].baz
stringer is more sensitive to certain errors than other generators, so
by running it last we give the other generators a chance to get things
straight before we ask stringer to run.
Traversal operators are the operators that can appear after a value
to traverse into the data structure that value represents. So far only
the attribute access operator is implemented.
Previously it was returning DynamicVal, but that's incorrect since it
would mean that even an otherwise-complete result that has an unpopulated
optional attribute would include an unknown.
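
A sketch of the corrected behavior using cty (optionalAttr is a
hypothetical helper; only cty.NullVal and cty.DynamicVal are real
API):

    package example

    import "github.com/zclconf/go-cty/cty"

    // When an optional attribute is absent we return a typed null,
    // meaning "definitely not set", rather than cty.DynamicVal, which
    // would wrongly mark the result as unknown.
    func optionalAttr(attrs map[string]cty.Value, name string, ty cty.Type) cty.Value {
        if v, ok := attrs[name]; ok {
            return v
        }
        return cty.NullVal(ty)
    }
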
This allows tooling to get a string describing the context of a particular
offset into the file. This is used, for example, to provide context
above the source code snippets in console-printed diagnostic messages.