|
Use a better formatting style.
|
|
Fix the broken Makefiles.
|
|
I think I should make a new sub-crate dedicated to the tokenization
process instead.
|
|
Try to fix some minor issues.
|
|
* chain/src/item/default/printer.lldb:
* chain/src/item/default/printer.py: These are for experimenting with
debugger support.
|
|
* chain/src/item/default/mod.rs:
* graph/src/labelled/binary.rs:
* graph/src/labelled/double.rs:
* graph/src/lib.rs: If we set the option "ordering" to "out" in the
declaration of nodes at the beginning, then GraphViz will not reorder
the children of nodes. The result looks much better in my opinion.
See the sketch below.
* INSTALL: make insists on changing this file, so let it be.
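For reference, a minimal sketch of the idea in Rust; the function
name `print_dot_preamble` is hypothetical and not part of the crate:

    // Hypothetical sketch: emit a DOT preamble that sets a default
    // node attribute, so GraphViz keeps the out-going edges of every
    // node in the order in which they appear in the file.
    fn print_dot_preamble(out: &mut String) {
        out.push_str("digraph forest {\n");
        // "ordering=out" pins the order of children under each node.
        out.push_str("  node [ordering=out];\n");
    }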
|
|
I have fixed another bug and think that the resulting, more stable
version is worth bumping the versions for.
|
|
Add an intentionally ambiguous grammar for testing purposes.
It seems to work fine.
|
|
* nfa/src/default/regex.rs: Previously when merging regular
expressions, only the graphs were merged, while the `types` array
stayed unchanged. This caused out-of-bounds index errors.
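A toy model of what the fix amounts to; the types and names below are
illustrative, not the actual API:

    // Toy model: a regular expression as a graph (here a plain
    // adjacency list) plus a parallel `types` array, one entry per
    // node.
    struct Regex<T> {
        graph: Vec<Vec<usize>>,
        types: Vec<T>,
    }

    impl<T> Regex<T> {
        fn merge(&mut self, other: Regex<T>) {
            let offset = self.types.len();
            // Shift the other graph's node indices past our nodes.
            self.graph.extend(other.graph.into_iter().map(|children| {
                children.into_iter().map(|c| c + offset).collect()
            }));
            // The previously forgotten half: keep `types` in sync,
            // so that no node index falls out of bounds of `types`.
            self.types.extend(other.types);
        }
    }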
|
|
Previously the errors emitted while reading ABNF grammars reported
incorrect indices. Now this is fixed.
|
|
This makes it easier if I want to debug things.
|
|
|
|
|
|
This version bump is insignificant. I just find it notable that I
seem to have finally obtained a version without trivial bugs. Hooray!
|
|
The chain-rule machine needs a place-holder node at the beginning.
But afterwards that node is pure annoyance and disturbs the
functioning of the machine. Consequently I now remove that node when
the right time comes.
This seems to fix some other bugs, which is reasonable: the presence
of that bogus node is just noise to the machine and a source of
errors.
|
|
* chain/src/atom/default.rs (print_virtual): Previously printing
virtual nodes was done inside the function `print_nfa`; now this is
decoupled and thus more flexible.
|
|
* chain/src/item/genins.rs: The absolute path is too long and
unnecessary.
|
|
* grammar/src/label.rs (set_end_option): This function replaces the
old function `open_end`, as the new function is more general: there
is no specific situation where we only need to open the end of a
node without also needing to close it in an `if` statement.
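Roughly, the change amounts to the following sketch; the `Label` type
here is simplified for illustration:

    // Simplified stand-in for the real label type.
    struct Label {
        start: usize,
        end: Option<usize>, // `None` means the node is still open
    }

    impl Label {
        // The more general function: `Some(pos)` closes the node at
        // `pos`, while `None` opens it, so call sites no longer need
        // an `if` statement to choose between opening and closing.
        fn set_end_option(&mut self, end: Option<usize>) {
            self.end = end;
        }
    }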
|
|
* chain/src/item/default/splone.rs: Previously the function
`split_node` split the parents of split nodes through some ugly
logic. Now that logic is moved into a dedicated function, which
properly handles the splitting of parents, including the case when
the new node is open whereas the old node is closed: in that
situation we ought to put the new node under the open parents only,
as a closed node cannot contain an open node as a child by
definition.
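The invariant the dedicated function enforces can be summed up in a
tiny sketch; the types here are illustrative:

    // Illustrative label carrying only what matters here: whether
    // the node is still open.
    #[derive(Clone, Copy)]
    struct NodeLabel {
        is_open: bool,
    }

    // A closed node can never contain an open node as a child, so a
    // parent may adopt the new node only if the parent is open or
    // the new node is closed.
    fn may_adopt(parent: NodeLabel, child_is_open: bool) -> bool {
        parent.is_open || !child_is_open
    }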
|
|
Huh.
|
|
* graph/src/lib.rs: Add a plan to reduce the number of
bounds-checking. Hopefully this makes the package more efficient.
|
|
* chain/src/atom/default.rs: Making this function public means I do
not have to worry about it being unused.
|
|
Previously a virtual fragment did not receive proper ending positions.
This is now fixed.
Additionally, after this fix, the function `set_pos` is only called
with the last parameter set to `t`. Maybe I shall remove this
parameter.
|
|
This bug caused a plain unambiguous grammar to become ambiguous.
Funnily enough, this bug revealed a lot of bugs in the code for
handling forests. I guess this is an unexpected surprise. :D
|
|
The function `set_pos` is kind of subtle, and its behaviour needs a
unit test so that we can be sure it does not set the ending positions
carelessly.
|
|
Previously, when generating a fragment of the forest corresponding to
the expansion of a non-terminal by a terminal, we incorrectly set the
end of every node within it to one plus the start whenever the
expansion happened due to a reduction.
Now this mistake is fixed and the ending positions are set correctly.
|
|
* chain/src/atom/default.rs (print_nullables): This function prints
the nullable nodes of an atomic language. This is useful when
designing unit tests, as it lets us know which rule positions are
considered accepting by the atomic language under test.
|
|
* chain/src/item/default/splone.rs: Previously, when we split nodes,
we always cloned the parent if the labels differed. This turns out
to be incorrect if the new label is open whereas the old label is
closed: in that case the old parent should not contain the new node
as a child, as a closed node should not contain an open node.
I am not yet entirely sure this fix is correct, so more tests await
us.
|
|
This is not of much use right now, but can be helpful later.
|
|
In the process of splitting, cloning, and planting the forest, I
forgot to check whether some cloned node of the node in question
satisfies the condition. This used to produce forests that violated
some fundamental assumptions. Now this is supposed to be fixed, but
more tests await us.
|
|
* src/test.c: input is a malloc'ed pointer, which can be NULL if
malloc cannot allocate enough memory, so I have to guard against
this possibility.
Aside: why are some intermediate files added again?
|
|
|
|
|
|
Those were added by accident.
|
|
|
|
Previously the functions `is_prefix` and `plant` did not take the
situation of packed nodes into consideration. That was because I
only dealt with non-packed nodes in the past: the fragments to test
for prefixes and for planting did not intersect the packed nodes in
the forest, and the grammars were so simple that the fragments did
not contain packed nodes.
Then a test revealed this situation, so I had to fix this lack of
consideration. This commit attempts to fix the issue.
From the newly added unit tests, it seems that the fix works. :)
|
|
I do not use a tool to automatically format the code, so sometimes
the code looks ugly. This commit reformats the code so that it looks
better and the lines are shorter.
|
|
Now the binding part is finished.
What remains is a bug encountered when planting a fragment into the
forest which intersects a packed node, which leads to invalid
forests. This will also cause problems when planting a packed
fragment, but so far my testing grammars do not produce packed
fragments, so that problem has not been encountered yet.
I am still figuring out efficient ways to solve this problem.
|
|
Adding a grammar and a document for testing purposes.
|
|
Add more directories under control of autotools.
|
|
There were two main issues in the previous version.

One is that there were lots of duplications of nodes when
manipulating the forest. This does not mean that labels repeated: by
the construction of the data type that cannot happen. What happened
is that there were cloned nodes whose children were exactly equal.
In such a case there is no need to clone the node in the first
place. This is now fixed by checking carefully before cloning, so
that we do not clone unnecessary nodes; see the sketch below.

The other issue, which is perhaps more important, is that there were
nodes which were not closed. This means that when there should be a
reduction of grammar rules, the forest did not mark the corresponding
node as already reduced. The incorrect forests thus caused are hard
to repair: I tried several different approaches to fix them
afterwards, but all to no avail. I also tried to record enough
information to fix these nodes during the manipulations. It turned
out that recording nodes is a dead end, as I cannot properly
synchronize the information in the forest with the information in
the chain-rule machine. Any inconsistency will result in incorrect
operations later on.

The approach I finally adopted is to perform every possible reduction
at each step. This might lead to more nodes than we need. But those
are technically expected to be there after all, and it is easy to
filter them out, so it is fine, from my point of view at the moment.

Therefore, what remains is to filter those nodes out and connect it
to the holy Emacs. :D
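A toy model of the de-duplication check for the first issue; the
`Forest` representation below is purely illustrative:

    // Purely illustrative forest: each node stores its children,
    // and clones[n] lists the clones made of node n.
    struct Forest {
        children: Vec<Vec<usize>>,
        clones: Vec<Vec<usize>>,
    }

    impl Forest {
        // Only clone `node` if no existing clone already has
        // exactly the same children; otherwise reuse that clone
        // instead of duplicating the node.
        fn clone_node_if_needed(&mut self, node: usize) -> usize {
            if let Some(&c) = self.clones[node]
                .iter()
                .find(|&&c| self.children[c] == self.children[node])
            {
                return c;
            }
            let new = self.children.len();
            self.children.push(self.children[node].clone());
            self.clones.push(Vec::new());
            self.clones[node].push(new);
            new
        }
    }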
|
|
Generally speaking the algorithm now works correctly and produces the
right shape of forest for the test ambiguous grammar as well.
However, it does not correctly perform the "reductions". It seems
that I deliberately disabled this part of the functionality in a
previous debugging tour.
So I have to enable it again and see if it works.
|
|
|
|
I should have staged and committed these changes separately, but I am
too lazy to deal with that.

The first main change in this commit is that I added the derive macro
that automates the delegation of the `Graph` trait. This saves a lot
of boilerplate code.

The second main change, perhaps the most important one, is that I
found and tried to fix a bug that caused duplication of nodes. The
bug arises from splitting or cloning a node multiple times and
immediately planting the same fragment under the new "sploned" node.
That is, when we try to splone the node again, we find that we need
to, because the node created by the earlier sploning now has a
different label owing to the planted fragment. Then after the
sploning we plant the fragment again. This makes the newly sploned
node have the same label (except for the clone index) and the same
children as the node that was sploned and planted in the previous
round.

The fix is to check for the existence of a node that has the same
children as the about-to-be-sploned node, except for the last one,
which contains the about-to-be-planted fragment as a prefix. If such
a node exists, treat it as an already existing result, so that we do
not have to splone the node again; see the sketch below.

This is consistent with the principle of not creating what we do not
need.
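One plausible reading of that check, over plain index vectors; all
names here are illustrative:

    // `candidate_children` makes a new splone redundant when all
    // its children except the last coincide with the node's
    // children, and its last child already contains the planted
    // fragment as a prefix.
    fn splone_redundant(
        candidate_children: &[usize],
        node_children: &[usize],
        last_contains_fragment_prefix: impl Fn(usize) -> bool,
    ) -> bool {
        match candidate_children.split_last() {
            Some((&last, init)) => {
                init == node_children && last_contains_fragment_prefix(last)
            }
            None => false,
        }
    }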
|
|
|
|
* DIARY: Added a diary that might serve as a record of my thoughts.
|
|
The macro `graph_derive` can automatically write the boilerplate
code for wrapper types one of whose sub-fields implements the `Graph`
trait. The generated implementation delegates the `Graph` operations
to the sub-field which implements the `Graph` trait.
I plan to add more macros, corresponding to various other
graph-related traits, so that no such boilerplate code is needed, at
least for my use-cases.
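In spirit, the macro saves us from writing delegation like the
following by hand; the `Graph` trait here is cut down for
illustration:

    // Cut-down stand-in for the real `Graph` trait.
    trait Graph {
        fn nodes_len(&self) -> usize;
        fn degree(&self, node: usize) -> usize;
    }

    // A wrapper whose sub-field `inner` implements `Graph`.
    struct Wrapper<G: Graph> {
        inner: G,
        extra: usize,
    }

    // The boilerplate the derive macro writes for us: every `Graph`
    // operation is forwarded to the sub-field.
    impl<G: Graph> Graph for Wrapper<G> {
        fn nodes_len(&self) -> usize {
            self.inner.nodes_len()
        }
        fn degree(&self, node: usize) -> usize {
            self.inner.degree(node)
        }
    }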
|
|
Finished the function for performing extra reductions.
Still untested, though.
|
|
In the chain-rule machine, we need to skip through edges whose labels
are "accepting", otherwise the time complexity will be high even for
simple grammars. This implies that we will skip some "jumping up" in
the item derivation forest, so we need to record these skipped jumps
in order to perform them at a later point.
The `Reducer` type plays this role. But I still need more
experiments to see if this approach works out as I intended.
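A minimal sketch of the role such a type could play, assuming a map
from positions to pending jump-ups; the representation is my guess,
not the actual definition:

    use std::collections::HashMap;

    // Guessed representation: for each position, the forest nodes
    // whose "jump up" was skipped and must still be performed there.
    struct Reducer {
        pending: HashMap<usize, Vec<usize>>,
    }

    impl Reducer {
        // Remember a skipped jump-up to perform at position `at`.
        fn record(&mut self, at: usize, node: usize) {
            self.pending.entry(at).or_default().push(node);
        }

        // Retrieve (and clear) the jump-ups due at position `at`.
        fn take(&mut self, at: usize) -> Vec<usize> {
            self.pending.remove(&at).unwrap_or_default()
        }
    }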
|
|
|