<feed xmlns='http://www.w3.org/2005/Atom'>
<title>rep.git/nfa, branch 0.1.2</title>
<subtitle>Parser generator for Emacs written in Rust
</subtitle>
<link rel='alternate' type='text/html' href='https://git.jsdurand.xyz/rep.git/'/>
<entry>
<title>Bump version number to 0.1.2</title>
<updated>2023-07-21T08:30:25+00:00</updated>
<author>
<name>JSDurand</name>
<email>mmemmew@gmail.com</email>
</author>
<published>2023-07-21T08:30:25+00:00</published>
<link rel='alternate' type='text/html' href='https://git.jsdurand.xyz/rep.git/commit/?id=ffb6e689d28d295733b90f2b9e184205e33f19c2'/>
<id>ffb6e689d28d295733b90f2b9e184205e33f19c2</id>
<content type='text'>
I have fixed another bug and think that this more stable version is
worth bumping the version number for.
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
I have fixed another bug and think that this more stable version is
worth bumping the version number for.
</pre>
</div>
</content>
</entry>
<entry>
<title>regex: Merge the `types` array as well.</title>
<updated>2023-07-21T08:20:00+00:00</updated>
<author>
<name>JSDurand</name>
<email>mmemmew@gmail.com</email>
</author>
<published>2023-07-21T08:20:00+00:00</published>
<link rel='alternate' type='text/html' href='https://git.jsdurand.xyz/rep.git/commit/?id=659c2195d2c224122cc8da813bec3af46084b61b'/>
<id>659c2195d2c224122cc8da813bec3af46084b61b</id>
<content type='text'>
* nfa/src/default/regex.rs: Previously, when merging regular
  expressions, only the graphs were merged, but the `types` array
  stayed unchanged.  This caused index-out-of-bounds errors.
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
* nfa/src/default/regex.rs: Previously, when merging regular
  expressions, only the graphs were merged, but the `types` array
  stayed unchanged.  This caused index-out-of-bounds errors.
</pre>
</div>
</content>
</entry>
<entry>
<title>Finished the Emacs binding.</title>
<updated>2023-07-08T04:31:13+00:00</updated>
<author>
<name>JSDurand</name>
<email>mmemmew@gmail.com</email>
</author>
<published>2023-07-08T04:30:21+00:00</published>
<link rel='alternate' type='text/html' href='https://git.jsdurand.xyz/rep.git/commit/?id=9a317e56f8a6126583f7d0c431bf878d9b1fe7b1'/>
<id>9a317e56f8a6126583f7d0c431bf878d9b1fe7b1</id>
<content type='text'>
Now the binding part is finished.

What remains is a bug encountered when planting a fragment into the
forest that intersects a packed node, which would lead to invalid
forests.  This will also cause problems when planting a packed
fragment, but so far my testing grammars do not produce packed
fragments, so that case has not been encountered yet.

I am still figuring out efficient ways to solve this problem.
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Now the binding part is finished.

What remains is a bug encountered when planting a fragment into the
forest that intersects a packed node, which would lead to invalid
forests.  This will also cause problems when planting a packed
fragment, but so far my testing grammars do not produce packed
fragments, so that case has not been encountered yet.

I am still figuring out efficient ways to solve this problem.
</pre>
</div>
</content>
</entry>
<entry>
<title>fixed the bugs of node duplications and left-open nodes</title>
<updated>2023-06-18T07:03:34+00:00</updated>
<author>
<name>JSDurand</name>
<email>mmemmew@gmail.com</email>
</author>
<published>2023-06-18T07:03:34+00:00</published>
<link rel='alternate' type='text/html' href='https://git.jsdurand.xyz/rep.git/commit/?id=a80db17473ff09cc72acba2c1975101e6dbedf39'/>
<id>a80db17473ff09cc72acba2c1975101e6dbedf39</id>
<content type='text'>
There were two main issues in the previous version.

One is that there were lots of duplicated nodes when manipulating the
forest.  This does not mean that labels repeat: the data type makes
that impossible.  What happened is that there were cloned nodes whose
children were exactly equal, in which case there was no need to clone
the node in the first place.  This is now fixed by checking carefully
before cloning, so that we do not clone unnecessary nodes.

The other issue, which is perhaps more important, is that there were
nodes which were not closed.  This means that when there should be a
reduction of grammar rules, the forest does not mark the corresponding
node as already reduced.  The incorrect forests thus produced are hard
to fix: I tried several different approaches to fix them afterwards,
but all to no avail.  I also tried to record enough information to fix
these nodes during the manipulations.  It turned out that recording
nodes is a dead end, as I cannot properly synchronize the information
in the forest with the information in the chain-rule machine.  Any
inconsistencies will result in incorrect operations later on.

The approach I finally adopted is to perform every possible reduction
at each step.  This might lead to more nodes than we need, but those
are technically expected to be there after all, and it is easy to
filter them out, so it is fine, from my point of view at the moment.

Therefore, what remains is to filter those nodes out and connect the
result to the holy Emacs.  :D
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
There were two main issues in the previous version.

One is that there were lots of duplicated nodes when manipulating the
forest.  This does not mean that labels repeat: the data type makes
that impossible.  What happened is that there were cloned nodes whose
children were exactly equal, in which case there was no need to clone
the node in the first place.  This is now fixed by checking carefully
before cloning, so that we do not clone unnecessary nodes.

The other issue, which is perhaps more important, is that there were
nodes which were not closed.  This means that when there should be a
reduction of grammar rules, the forest does not mark the corresponding
node as already reduced.  The incorrect forests thus produced are hard
to fix: I tried several different approaches to fix them afterwards,
but all to no avail.  I also tried to record enough information to fix
these nodes during the manipulations.  It turned out that recording
nodes is a dead end, as I cannot properly synchronize the information
in the forest with the information in the chain-rule machine.  Any
inconsistencies will result in incorrect operations later on.

The approach I finally adopted is to perform every possible reduction
at each step.  This might lead to more nodes than we need, but those
are technically expected to be there after all, and it is easy to
filter them out, so it is fine, from my point of view at the moment.

Therefore, what remains is to filter those nodes out and connect the
result to the holy Emacs.  :D
</pre>
</div>
</content>
</entry>
<entry>
<title>Fix a bug of duplication from planting after sploing</title>
<updated>2023-06-02T07:00:48+00:00</updated>
<author>
<name>JSDurand</name>
<email>mmemmew@gmail.com</email>
</author>
<published>2023-06-02T07:00:48+00:00</published>
<link rel='alternate' type='text/html' href='https://git.jsdurand.xyz/rep.git/commit/?id=8486474f377faf2d800d79166a7abe6b975e3e50'/>
<id>8486474f377faf2d800d79166a7abe6b975e3e50</id>
<content type='text'>
I should have staged and committed these changes separately, but I am
too lazy to deal with that.

The first main change in this commit is that I added a derive macro
that automates the delegation of the Graph trait.  This saves a lot of
boilerplate code.

The second main change, perhaps the most important one, is that I
found and tried to fix a bug that caused duplication of nodes.  The
bug arises from splitting or cloning a node multiple times and
immediately planting the same fragment under the new "sploned" node.
That is, when we try to splone the node again, we find that we need to
do so, because the node created by the same sploning process now has a
different label owing to the planted fragment.  Then, after the
sploning, we plant the fragment again.  This makes the newly sploned
node have the same label (except for the clone index) and the same
children as the node that was sploned and planted in the previous
round.

The fix is to check for the existence of a node that has the same set
of children as the about-to-be-sploned node, except for the last one,
which contains the about-to-be-planted fragment as a prefix.  If such
a node exists, treat it as an already existing node, so that we do not
have to splone again.

This is consistent with the principle of not creating what we do not
need.
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
I should have staged and committed these changes separately, but I am
too lazy to deal with that.

The first main change in this commit is that I added a derive macro
that automates the delegation of the Graph trait.  This saves a lot of
boilerplate code.

The second main change, perhaps the most important one, is that I
found and tried to fix a bug that caused duplication of nodes.  The
bug arises from splitting or cloning a node multiple times and
immediately planting the same fragment under the new "sploned" node.
That is, when we try to splone the node again, we find that we need to
do so, because the node created by the same sploning process now has a
different label owing to the planted fragment.  Then, after the
sploning, we plant the fragment again.  This makes the newly sploned
node have the same label (except for the clone index) and the same
children as the node that was sploned and planted in the previous
round.

The fix is to check for the existence of a node that has the same set
of children as the about-to-be-sploned node, except for the last one,
which contains the about-to-be-planted fragment as a prefix.  If such
a node exists, treat it as an already existing node, so that we do not
have to splone again.

This is consistent with the principle of not creating what we do not
need.
</pre>
</div>
</content>
</entry>
<entry>
<title>before a major refactor</title>
<updated>2023-02-27T04:36:41+00:00</updated>
<author>
<name>JSDurand</name>
<email>mmemmew@gmail.com</email>
</author>
<published>2023-02-27T04:36:41+00:00</published>
<link rel='alternate' type='text/html' href='https://git.jsdurand.xyz/rep.git/commit/?id=fbaa420ed550e9c3e7cdc09d4a8ec22bfbd782a6'/>
<id>fbaa420ed550e9c3e7cdc09d4a8ec22bfbd782a6</id>
<content type='text'>
I have decided to adopt a new approach to recording and updating item
derivation forests.  Since this affects a lot of things, I am
committing before the refactor, so that I can create a branch for it.
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
I have decided to adopt a new approach to recording and updating item
derivation forests.  Since this affects a lot of things, I am
committing before the refactor, so that I can create a branch for it.
</pre>
</div>
</content>
</entry>
<entry>
<title>Added the functionality of split or clone.</title>
<updated>2023-02-12T04:07:34+00:00</updated>
<author>
<name>JSDurand</name>
<email>mmemmew@gmail.com</email>
</author>
<published>2023-02-12T04:07:34+00:00</published>
<link rel='alternate' type='text/html' href='https://git.jsdurand.xyz/rep.git/commit/?id=987c84f3454c687cca0efe0d471fcf00e052ecab'/>
<id>987c84f3454c687cca0efe0d471fcf00e052ecab</id>
<content type='text'>
I need more than the ability to clone nodes: I also need to split
them.  Now this seems to have been added correctly.
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
I need more than the ability to clone nodes: I also need to split
them.  Now this seems to have been added correctly.
</pre>
</div>
</content>
</entry>
<entry>
<title>Finally produced the first correct forest</title>
<updated>2023-02-03T02:52:35+00:00</updated>
<author>
<name>JSDurand</name>
<email>mmemmew@gmail.com</email>
</author>
<published>2023-02-03T02:52:35+00:00</published>
<link rel='alternate' type='text/html' href='https://git.jsdurand.xyz/rep.git/commit/?id=265ff8f87dc7392fdf701f811eb2bf54d7bc6678'/>
<id>265ff8f87dc7392fdf701f811eb2bf54d7bc6678</id>
<content type='text'>
Finally the prototype parser has produced its first correct forest.
In fact, this is the first correct forest I have generated since the
beginning of this project.
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Finally the prototype parser has produced its first correct forest.
In fact, this is the first correct forest I have generated since the
beginning of this project.
</pre>
</div>
</content>
</entry>
<entry>
<title>a prototype of an item derivation forest</title>
<updated>2023-01-28T02:22:57+00:00</updated>
<author>
<name>JSDurand</name>
<email>mmemmew@gmail.com</email>
</author>
<published>2023-01-28T02:17:24+00:00</published>
<link rel='alternate' type='text/html' href='https://git.jsdurand.xyz/rep.git/commit/?id=f28155105134b90fd86049c65478d307e0d8dbbc'/>
<id>f28155105134b90fd86049c65478d307e0d8dbbc</id>
<content type='text'>
It seems to be complete now, but it still awaits more tests to see
where the errors are, which should be plenty, haha.
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
It seems to be complete now, but it still awaits more tests to see
where the errors are, which should be plenty, haha.
</pre>
</div>
</content>
</entry>
<entry>
<title>chain: a prototype is added.</title>
<updated>2023-01-20T05:48:26+00:00</updated>
<author>
<name>JSDurand</name>
<email>mmemmew@gmail.com</email>
</author>
<published>2023-01-20T05:48:26+00:00</published>
<link rel='alternate' type='text/html' href='https://git.jsdurand.xyz/rep.git/commit/?id=18d7955b7d84c00467ede38baae53f4ce1fb6908'/>
<id>18d7955b7d84c00467ede38baae53f4ce1fb6908</id>
<content type='text'>
I have an ostensibly working prototype now.

Further tests are needed to make sure that the algorithm meets the
time complexity requirement, though.
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
I have an ostensibly working prototype now.

Further tests are needed to make sure that the algorithm meets the
time complexity requirement, though.
</pre>
</div>
</content>
</entry>
</feed>
