|
The purpose of this change is to share this trait with other crates,
such as the forthcoming "semiring" crate, which I plan to make
responsible for some simple semiring operations as well as for
querying.
|
|
Previously some incorrect forest nodes would be used for planting new
nodes. I cannot yet fix the root cause of their presence in the
chain-rule machine, but I can ignore them when they are encountered.
Of course I would prefer to prevent them from existing altogether, but
I still cannot figure out how.
|
|
* chain/src/default.rs: This is useful for debugging the chain-rule
machine.
|
|
Two bugs are fixed:
1. If a non-terminal expansion could be reduced immediately, an extra
node with no parents used to be created. This strange behaviour is
now corrected.
2. When performing reductions, a leaf non-terminal node used to be
regarded as completed right away. Now we first try to complete that
node, then check whether the completion succeeded, and only then
determine the completedness according to the result (a rough sketch
follows at the end of this message).
Of course some more tests are still pending before I can confirm that
no more bugs lurk around.
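The following only sketches the intended order of checks in the second
fix; the types and method names are made up and do not come from the
actual code:

    // Entirely hypothetical types, only to illustrate the order of checks.
    struct Forest;
    type NodeIndex = usize;

    impl Forest {
        fn is_leaf_nonterminal(&self, _node: NodeIndex) -> bool { true }
        fn try_complete(&mut self, _node: NodeIndex) -> Result<(), ()> { Ok(()) }

        // Previously a leaf non-terminal was regarded as completed outright;
        // now we first attempt the completion and judge by its result.
        fn leaf_completedness(&mut self, node: NodeIndex) -> bool {
            debug_assert!(self.is_leaf_nonterminal(node));
            self.try_complete(node).is_ok()
        }
    }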
|
|
Adjust the code slightly.
Also add a plan to implement context-free memoization.
|
|
The chain-rule machine needs a place-holder node at the beginning.
Afterwards, however, that node is pure annoyance and disturbs the
functioning of the machine. Consequently I now remove that node when
the right time comes.
This seems to fix some other bugs as well, which is reasonable: the
presence of that bogus node is just noise to the machine and a source
of errors.
|
|
* chain/src/item/default/splone.rs: Previously, when we split nodes,
we always cloned the parent if the labels differed. This turns out
to be incorrect if the new label is open whereas the old label is
closed. In that case the old parent should not contain the new node
as a child, since a closed node should not contain an open node.
I am not yet entirely sure this fix is correct, so more tests await
us.
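A sketch of the corrected condition, with invented open/closed helpers
that are assumptions rather than the real label API:

    // Hypothetical labels: an "open" node may still receive children,
    // while a "closed" one is already reduced.
    #[derive(PartialEq)]
    enum Status { Open, Closed }

    struct Label { status: Status }

    // Should the old parent also keep the newly split node as a child?
    // Not when the new label is open while the old one is closed,
    // because a closed node must not contain an open node.
    fn old_parent_keeps_new_child(old: &Label, new: &Label) -> bool {
        !(new.status == Status::Open && old.status == Status::Closed)
    }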
|
|
In the process of splitting, cloning, and planting the forest, I
forgot to check whether some cloned node of the node in question
satisfies the condition. This used to produce forests that violate
some fundamental assumptions. Now this is supposed to be fixed, but
more tests await us.
|
|
Now the binding part is finished.
What remains is a bug encountered when planting a fragment into the
forest while the fragment intersects a packed node, which leads to
invalid forests. This will also cause problems when planting a
packed fragment, but so far my test grammars do not produce packed
fragments, so that problem has not been encountered yet.
I am still figuring out efficient ways to solve this problem.
|
|
There were two main issues in the previous version.
One is that there were lots of duplicate nodes when manipulating the
forest. This does not mean that labels repeated: by the design of the
data type that cannot happen. What happened is that there were cloned
nodes whose children were exactly equal. In such a case there is no
need to clone the node in the first place. This is now fixed by
checking carefully before cloning, so that we do not clone unnecessary
nodes.
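A minimal sketch of this "check before cloning", under the assumption
(not the real API) that the forest can enumerate the existing clones of
a node and list the children of each:

    use std::collections::HashMap;

    type NodeIndex = usize;

    // Hypothetical forest representation: each node has an ordered list of
    // children, and the clones of an original node are grouped together.
    struct Forest {
        children: HashMap<NodeIndex, Vec<NodeIndex>>,
        clones: HashMap<NodeIndex, Vec<NodeIndex>>,
    }

    impl Forest {
        // Return an existing clone whose children are exactly `wanted`, so
        // the caller can reuse it instead of producing a duplicate clone.
        fn find_equal_clone(&self, node: NodeIndex, wanted: &[NodeIndex])
                            -> Option<NodeIndex> {
            self.clones.get(&node)?.iter().copied().find(|clone| {
                self.children.get(clone).map(Vec::as_slice) == Some(wanted)
            })
        }
    }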
The other issue, which is perhaps more important, is that some nodes
were not closed. This means that when there should have been a
reduction of grammar rules, the forest did not mark the corresponding
node as already reduced. The incorrect forests thus caused are hard
to fix: I tried several different approaches to repair them
afterwards, but all to no avail. I also tried to record enough
information to fix these nodes during the manipulations. It turned
out that recording nodes is a dead end, as I cannot properly
synchronize the information in the forest with the information in the
chain-rule machine, and any inconsistency results in incorrect
operations later on.
The approach I finally adopted is to perform every possible reduction
at each step. This might lead to more nodes than we need, but those
are technically expected to be there after all, and it is easy to
filter them out, so it is fine from my point of view at the moment.
Therefore, what remains is to filter those nodes out and to connect
this to the holy Emacs. :D
|
|
I should have staged and committed these changes separately, but I am
too lazy to deal with that.
The first main change in this commit is that I added the derive macro
that automates the delegation of the Graph trait. This saves a lot
of boilerplate code.
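To make the saved boilerplate concrete, here is the kind of hand-written
forwarding impl that such a derive macro is meant to generate; the trait
shown is a simplified stand-in, not the real Graph trait:

    // Simplified stand-in for the Graph trait; the real one has more methods.
    trait Graph {
        fn nodes_len(&self) -> usize;
        fn has_edge(&self, from: usize, to: usize) -> bool;
    }

    struct Inner; // stand-in for an underlying graph type

    impl Graph for Inner {
        fn nodes_len(&self) -> usize { 0 }
        fn has_edge(&self, _from: usize, _to: usize) -> bool { false }
    }

    // Without the macro, every wrapper needs this kind of manual delegation;
    // the derive macro generates it from the wrapped field instead.
    struct Wrapper { graph: Inner }

    impl Graph for Wrapper {
        fn nodes_len(&self) -> usize { self.graph.nodes_len() }
        fn has_edge(&self, from: usize, to: usize) -> bool {
            self.graph.has_edge(from, to)
        }
    }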
The second main change, perhaps the most important one, is that I
found and tried to fix a bug that caused duplication of nodes. The
bug arises from splitting or cloning a node multiple times and
immediately planting the same fragment under the new "sploned" node.
That is, when we try to splone the node again, we find that we need
to, because the node created by the previous sploning now has a
different label owing to the planted fragment. Then, after the
sploning, we plant the fragment again. This makes the newly sploned
node have the same label (except for the clone index) and the same
children as the node that was sploned and planted in the previous
round.
The fix is to check for the existence of a node that has the same
children as the about-to-be-sploned node, except for the last child,
which must contain the about-to-be-planted fragment as a prefix. If
such a node exists, treat it as an already existing node, so that we
do not have to splone again.
This is consistent with the principle of not creating what we do not
need.
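The comparison below is one possible reading of that check, with
invented names; the child lists and the prefix test are assumptions,
not the actual interface:

    type NodeIndex = usize;

    // Does `candidate_children` describe a node that already plays the role
    // of the about-to-be-sploned-and-planted node?  That is the case when
    // the children agree except for the last one, and the candidate's last
    // child contains the about-to-be-planted fragment as a prefix.
    fn already_exists(
        candidate_children: &[NodeIndex],
        sploned_children: &[NodeIndex],
        last_child_has_fragment_as_prefix: impl Fn(NodeIndex) -> bool,
    ) -> bool {
        match (candidate_children.split_last(), sploned_children.split_last()) {
            (Some((c_last, c_init)), Some((_s_last, s_init))) => {
                c_init == s_init && last_child_has_fragment_as_prefix(*c_last)
            }
            _ => false,
        }
    }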
|
|
Finished the function that performs extra reductions.
It is still untested, though.
|
|
In the chain-rule machine, we need to skip edges whose labels are
"accepting", otherwise the time complexity will be high even for
simple grammars. This implies that we will skip some "jumping up" in
the item derivation forest, so we need to record these extra jumps in
order to perform them at a later point.
The Reducer type plays this role, but I still need more experiments to
see whether this approach works out as I intended.
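As a guess at the shape such a type might take, here is a minimal
sketch of a Reducer as a per-node record of deferred jumps; the field
and method names are mine, not the crate's:

    use std::collections::{HashMap, HashSet};

    type NodeIndex = usize;

    // Remembers, for each node, the "jumping up" steps that were skipped
    // when an accepting edge was passed over, so that the jumps can still
    // be performed at a later point.
    #[derive(Default)]
    struct Reducer {
        deferred: HashMap<NodeIndex, HashSet<NodeIndex>>,
    }

    impl Reducer {
        // Record that `from` still has to jump up to `to` later on.
        fn defer(&mut self, from: NodeIndex, to: NodeIndex) {
            self.deferred.entry(from).or_default().insert(to);
        }

        // Take (and forget) the deferred jumps recorded for `from`.
        fn take(&mut self, from: NodeIndex) -> HashSet<NodeIndex> {
            self.deferred.remove(&from).unwrap_or_default()
        }
    }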
|
|
|
|
* chain/src/default.rs:
* chain/src/lib.rs: Add a parameter that controls whether or not the
chain-rule machine computes the item derivation forest as well.
Sometimes we only need to recognize whether an input belongs to the
language of the grammar, without caring about the derivations. This
parameter can speed up the machine in that case.
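A hedged sketch of what this switch can look like from the caller's
side; the function name and signature are placeholders, not the actual
interface in chain/src/lib.rs:

    // Placeholder signature: when `compute_forest` is false the machine only
    // answers the membership question and skips the item derivation forest.
    fn chain_parse(input: &[usize], compute_forest: bool) -> bool {
        for &_token in input {
            // advance the chain-rule machine by one token ...
            if compute_forest {
                // ... and, only when asked to, also extend the forest.
            }
        }
        // stubbed acceptance test
        true
    }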
|
|
* chain/src/default.rs: Add a plan to fix things.
|
|
I have decided to adopt a new approach to recording and updating item
derivation forests. Since this affects a lot of things, I am
committing before the refactor, so that I can create a branch for it.
|
|
Previously there was a minor bug: if the chain-rule machine ended in a
node without children, a node that should be accepting because of
edges that have no children and hence were ignored, then, since the
node had no children, it would be regarded as not accepting. This
issue is now fixed by introducing real and imaginary edges, where an
imaginary edge is used to determine the acceptance of nodes without
children.
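One way to picture the distinction, with invented type names; an
imaginary edge carries no children and exists only so that acceptance
can still be decided for a childless node:

    type NodeIndex = usize;

    enum Edge {
        // A real edge leads to actual child nodes.
        Real { children: Vec<NodeIndex>, accepting: bool },
        // An imaginary edge has no children; it only records whether the
        // node should be considered accepting.
        Imaginary { accepting: bool },
    }

    fn node_is_accepting(edges: &[Edge]) -> bool {
        edges.iter().any(|edge| match edge {
            Edge::Real { accepting, .. } | Edge::Imaginary { accepting } => *accepting,
        })
    }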
|
|
I need more than the ability to clone nodes: I also need to split the
nodes. Now this seems to be correctly added.
|
|
Finally the prototype parser has produced the first correct forest.
This is, in fact, the first time I have generated a correct forest
since the beginning of this project.
|
|
It seems to be complete now, but it still awaits more tests to see
where the errors are, of which there should be plenty, haha.
|
|
Now the forest can detect if a node is packed or cloned, and correctly
clones a node in those circumstances. But it still needs to be
tested.
|
|
It seems the performance is indeed linear for a simple grammar.
This is such a historic moment, for me, that I think it deserves a
separate commit, haha.
|
|
I have an ostensibly working prototype now.
Further tests are needed to make sure that the algorithm meets the
time complexity requirement, though.
|
|
I put functionalities that are not strictly core into separate
crates, so that the whole package becomes more modular and it is
easier to try other parsing algorithms in the future.
Also, I have to figure the forests out before finishing the core
chain-rule algorithm, as the part about forests affects the labels of
the grammars directly. From my experience in writing the previous
version, it is asking for trouble to change the label types
dramatically at a later point: too many places need to be changed.
Thus I decided to figure out the rough part of forests first.
Actually I only have to figure out how to attach forest fragments to
edges of the underlying atomic languages; the more complex parts of
putting forests together can be left to the recorders, which is my
vision of assembling semiring values during the run of the chain-rule
machine.
It should be relatively easy to produce forest fragments from
grammars, since we are just trying to extract some information from
the grammar, not to manipulate that information in some complicated
way. We do have to do some manipulation in the process, though, in
order to make sure that the nulling and epsilon-removal processes do
not invalidate these fragments.
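For the record, the semiring interface I have in mind is roughly the
textbook one; the sketch below only illustrates the notion and is not
the API of the planned crate:

    // Textbook semiring: two associative operations with identities, where
    // multiplication distributes over addition.
    trait Semiring: Clone {
        fn zero() -> Self; // identity of `add`
        fn one() -> Self;  // identity of `mul`
        fn add(&self, other: &Self) -> Self;
        fn mul(&self, other: &Self) -> Self;
    }

    // The Boolean semiring recognises membership: `add` is "or", `mul` is
    // "and"; richer semirings could carry derivation information instead.
    impl Semiring for bool {
        fn zero() -> Self { false }
        fn one() -> Self { true }
        fn add(&self, other: &Self) -> Self { *self || *other }
        fn mul(&self, other: &Self) -> Self { *self && *other }
    }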
|