Fuzzy Mind Vault

Solo Game Jam 2

Morning, the 15th of November, 2016

The solo game jam is going well, but not as fast as expected.

The code is currently in the "expanding" phase that precedes the "shrinking" phase. It should not be that way. The lesson learned here is to use an ECS engine. EntityX seems to be a good candidate, and as soon as the game reaches its first stable state, a switch will be made.

A second Solo Game Jam will be necessary to get a fresh start with all the newly acquired technologies. The current deadline-free game jam (done on free time within a very busy schedule) should end in a couple of weeks.

Solo Game Jam 1

Evening, the 8th of November, 2016

Time to speed things up.

There is a neat framework called SFML. Let's make an XBlast clone with it, targeting Monday as the release date.

The flocking engine is still on its way. It needs polishing, testing and, of course, some relevant examples.

Flocking all the way

Late, the 7th of November, 2016

Following the search tree fun times, a repository called headless-logic has been created. It aims to be a collection of tools to ease solving computational problems. And it's also for fun.

After a minimal genetic algorithm framework and the embryo of a single-layer feedback neural network, let's return to something that will be useful soon: managing flocking behaviour in order to simulate constrained, mixed-goal crowds. The starting point is "Flocking for Multi-Agent Dynamic Systems: Algorithms and Theory" by Reza Olfati-Saber. This paper is terrific but a bit hard to read without some high-level mathematical background.

By the way, the goal here is to assimilate the theory in order to create a framework whose concept implementations lead to one flocking model or another. A lot of work to do.
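Before diving into the paper's formalism, the intuition can be sketched with a naive boids-like steering rule. This is a toy sketch, not Olfati-Saber's algorithm: 'Agent', 'steering' and the gain parameters are hypothetical names, and the real code would use glm vectors.

```cpp
#include <cmath>
#include <vector>

// Minimal 2D vector (a stand-in for glm::vec2).
struct Vec2 {
    double x, y;
    Vec2 operator+(const Vec2& o) const { return {x + o.x, y + o.y}; }
    Vec2 operator-(const Vec2& o) const { return {x - o.x, y - o.y}; }
    Vec2 operator*(double s) const { return {x * s, y * s}; }
    double norm() const { return std::sqrt(x * x + y * y); }
};

struct Agent { Vec2 position; Vec2 velocity; };

// One cohesion/separation step for a single agent against its neighbours.
// 'radius' bounds the interaction range, in the spirit of the paper's
// proximity nets; 'cohesion' and 'separation' are tuning gains.
Vec2 steering(const Agent& self, const std::vector<Agent>& neighbours,
              double radius, double cohesion, double separation) {
    Vec2 steer = {0.0, 0.0};
    unsigned int count = 0;
    for(const Agent& other : neighbours) {
        Vec2 offset = other.position - self.position;
        double d = offset.norm();
        if(d > 0.0 && d < radius) {
            // Attract toward neighbours, repel when too close.
            steer = steer + offset * cohesion - offset * (separation / (d * d));
            ++count;
        }
    }
    if(count != 0) {
        steer = steer * (1.0 / count); // Average over the neighbourhood.
    }
    return steer;
}
```

The neighbourhood query is exactly where the search tree of the older posts comes into play.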

Coming back and enjoying the simplicity

Warm afternoon, 10th of July, 2016

If winter wasn't a very productive time, the following months were a disaster. The framework isn't maintained anymore, but skills have been earned about Android and broad architecture patterns.

Holidays are almost there, which isn't the best moment for productivity either. So, how can one be effective with a lot of distractions (damn you, Dwarf Fortress!), family life and mind-confusing medicine?

Hope and inspiration come from projects like kilo. Powerful code, less than 1k lines of code, no dependencies beyond the core libraries, nice features. At first glance, the code looks rather nice. A code review hasn't been done yet; perhaps it could be a good idea. But a better idea could be to make a simple yet powerful multiplayer game with only core libraries and little code.

Hiatus and starting over

Sometime, between 29th and 30th of December, 2015

Winter is not a very productive time. It has been tough for the body and the mind, so no big progress since then. However, the search tree has been included in the framework as part of the new Dumb Logic module (this name sounds good). Further additions to the framework will concern the declaration and definition of concepts as macros (as well as their documentation, of course).

A simple yet effective method has been added to the search tree: Move. It allows changing the key of an element within the tree while avoiding useless deletion/creation of nodes (with all the implied overhead). As stress testing is no longer useful, this will be extensively live tested.
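The Move method itself isn't reproduced here, but the idea can be illustrated on a toy uniform-grid locator instead of the actual tree: when the new key still maps to the same bucket, nothing is deleted or created. 'GridLocator' and its methods are hypothetical names, not the framework's API.

```cpp
#include <map>
#include <set>
#include <utility>

// Toy illustration of the 'move' idea: an element only changes buckets
// when its new position lands in a different cell.
class GridLocator {
public:
    explicit GridLocator(double cellSize) : _cellSize(cellSize) {}

    void add(int id, double x, double y) { _cells[cell(x, y)].insert(id); }

    // Change the position of 'id'; cheap when the cell doesn't change.
    void move(int id, double oldX, double oldY, double newX, double newY) {
        std::pair<int, int> from = cell(oldX, oldY);
        std::pair<int, int> to = cell(newX, newY);
        if(from == to) {
            return; // Same bucket: no deletion/creation at all.
        }
        _cells[from].erase(id);
        _cells[to].insert(id);
    }

    // Number of elements in the cell containing (x, y).
    std::size_t count(double x, double y) const {
        std::map<std::pair<int, int>, std::set<int> >::const_iterator it =
            _cells.find(cell(x, y));
        return (it == _cells.end()) ? 0 : it->second.size();
    }

private:
    std::pair<int, int> cell(double x, double y) const {
        return std::make_pair(static_cast<int>(x / _cellSize),
                              static_cast<int>(y / _cellSize));
    }
    double _cellSize;
    std::map<std::pair<int, int>, std::set<int> > _cells;
};
```

In the real tree, the cheap case is "the new key stays in the same leaf region"; only otherwise does the element migrate.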

Some last improvement before live testing

Late, the 29th of November, 2015

It might not seem useful. However, the last tests on the search tree showed that heap management and method calls were the main CPU consumers. So, why not work on that?

And, in fact, the performance improvements were met. However, the "stress test" is not that good (meaning that the test software isn't well conceived enough to be insightful). Still, the results are satisfying enough. No fancy charts this time.

Next move is live testing. The objective is to simulate a crowd of hundreds of agents at 60Hz, which might be achievable given the latest results of the search tree implementation. Code polishing will be needed too. It's not (by far) the best piece of code ever written. As it is C++11, it needs some protections, for instance the explicit keyword and so on.
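As an illustration of the kind of C++11 protections meant here (a generic sketch, not the framework's actual code; 'Buffer' is a hypothetical class):

```cpp
// 'explicit' prevents accidental implicit conversions from an integer,
// and '= delete' forbids unwanted copies of a resource-like object.
class Buffer {
public:
    explicit Buffer(unsigned int size) : _size(size) {}
    Buffer(const Buffer&) = delete;
    Buffer& operator=(const Buffer&) = delete;
    unsigned int size() const { return _size; }
private:
    unsigned int _size;
};

// Buffer b = 42;   // Does not compile: the constructor is explicit.
// Buffer ok(42);   // Fine: explicit construction.
```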

Stress Test

Morning, the 15th of November, 2015

It's time to stress test the search tree. It has many flaws, one of them being the fact that it hosts pointers to const elements, making it difficult to work with retrieved data (except by doing an evil const_cast, but that's not a really good thing at all). A unit test would be a good but hard thing to do, considering that the code behaviour heavily depends on the validity of the concept implementations (elements returning inconsistent keys, incorrect region partitioning, inconsistent search concept implementation and so on).

One might have noticed in the provided code that a visiting method has been added. It allows inspecting the search tree. Again, a visitor is a concept (the fifth one for this code) with a clearly defined contract. This will be discussed later but will be useful for the stress test, as it can check the tree state.

template <typename K, typename R, typename E>
template <typename V>
void Node<K, R, E>::visit(V &visitor) {
    visitor.enter(_region);
    if(_nodes != 0) {
        for(unsigned int i = 0; i < _count; ++i) {
            _nodes[i]->visit(visitor);
        }
    } else {
        visitor.inspect(_elements, _count);
    }
    visitor.exit();
}
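For illustration, a hypothetical visitor honouring the enter/inspect/exit contract could gather statistics to check the tree state ('StatsVisitor' is not part of the published code; the inspect parameter is templated so the sketch stays agnostic of the element pointer type):

```cpp
// Counts stored elements and tracks the maximum depth reached during
// the traversal: the kind of invariant check a stress test can perform.
template <typename R, typename E>
class StatsVisitor {
public:
    StatsVisitor() : _elements(0), _depth(0), _maxDepth(0) {}

    void enter(const R& /* region */) {
        ++_depth;
        if(_depth > _maxDepth) {
            _maxDepth = _depth;
        }
    }

    // Called on leaves with the element array and its count.
    template <typename P>
    void inspect(P /* elements */, unsigned int count) {
        _elements += count;
    }

    void exit() { --_depth; }

    unsigned int elements() const { return _elements; }
    unsigned int maxDepth() const { return _maxDepth; }

private:
    unsigned int _elements;
    unsigned int _depth;
    unsigned int _maxDepth;
};
```

Running it is just `tree.visit(visitor);` and then comparing the gathered counts against what was inserted.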

First thing to check: memory leaks. After running the stress test under Valgrind, good news: no leak at all (it was a bit predictable).

==4144== HEAP SUMMARY:
==4144==     in use at exit: 0 bytes in 0 blocks
==4144==   total heap usage: 19,749 allocs, 19,749 frees, 786,034 bytes allocated
==4144== 
==4144== All heap blocks were freed -- no leaks are possible
==4144== 
==4144== For counts of detected and suppressed errors, rerun with: -v
==4144== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

The stress test itself consists in initialising a pool of elements, filling the tree with them and then performing the following:

With X and Y large enough to be significant, and with different node cardinalities and numbers of elements in the tree. The tests are done with 2D keys, rectangle partitioning (quad-tree) and rectangular region search.
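Since the charts below report nanoseconds per operation, the measurement itself deserves a word. A minimal timing helper for such a stress test could look like this ('nanosPerOp' is a hypothetical helper, not the actual test code):

```cpp
#include <chrono>

// Run 'op' a number of times and report the mean cost in nanoseconds
// per operation, using a monotonic clock.
template <typename Op>
double nanosPerOp(Op op, unsigned int iterations) {
    std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
    for(unsigned int i = 0; i < iterations; ++i) {
        op();
    }
    std::chrono::steady_clock::time_point end = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count()
        / static_cast<double>(iterations);
}
```

The remove/change/add cycles and the searches would each be wrapped in such a lambda and timed separately.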

First, we test the Remove/Change/Add operation. The following chart displays the elapsed time in nanoseconds per element against the number of elements in the tree. The four datasets correspond to trees with node cardinalities of 8, 16, 32 and 64 (from lighter to darker).

It clearly appears that a higher cardinality is a plus here. But will it be the same for search queries?

Again, data are shown for element counts from 128 to 16384 and node cardinalities from 8 to 64 (from light to dark). As expected, the effect of the element count attenuates. The node cardinality does not seem to be relevant. Hence, let's compare searches at a fixed node cardinality. The following compares trees containing 128 to 16384 elements, with a node cardinality of 32 elements per node, and search regions ranging from (8x8) to (128x128) (hence, 0.8% to 12% of the area).

No surprise again. The performance is stable for small areas and highly degraded for wider areas as the number of elements increases.

All these results are expected for this kind of search tree. Concerning the performance (shown in nanoseconds per operation in the previous charts), it's acceptable considering that the test target is an old Intel Q6600 tuned to be silent. However, it's not fully significant, as it concerns evenly distributed elements and doesn't match a flocking topology. Those live tests will be the subject of another post.

Lowering the Search Tree Abstraction

Midday, the 13th of November, 2015

Yesterday's approach to a "generic" search tree was still a bit naive. In fact, the only kind of search was a circular/spherical one (that is, one key and a distance from it). Plus, there was a need for a distance function hosted by the Region concept. And that, my friends, is BAD.

A more adequate way to perform a search in such a tree is to delegate the acceptance criteria to another object. Hence, a new concept. Let's call it Search and define it this way:

/**
 * Search Concept.
 * @param <K> Key concept.
 * @param <R> Region concept.
 */
template<typename K, typename R> class S {
    public:
        /**
         * Test a region for intersection OR containment.
         * @param region Region to test.
         * @return A negative value if the search neither contains nor intersects the region,
         * zero if it intersects it,
         * a positive value if it fully contains the region.
         */
        int contains(const R& region) const;

        /**
         * Test key containment.
         * @param key Key to test.
         * @return 'true' if it contains the key.
         */
        bool contains(const K& key) const;
};

On a side note, the key containment method was forgotten in the Region concept. This will be corrected/added in the full code.
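To make the contract concrete, here is what a hypothetical implementation of the Search concept could look like for a rectangular query over 2D point keys ('Point', 'Rect' and 'RectangleSearch' are illustrative stand-ins for the real types):

```cpp
struct Point { double x, y; };
struct Rect { double minX, minY, maxX, maxY; };

// Rectangular query implementing the Search concept's two methods.
class RectangleSearch {
public:
    explicit RectangleSearch(const Rect& area) : _area(area) {}

    // Negative: disjoint. Zero: mere intersection. Positive: the query
    // fully contains the region.
    int contains(const Rect& region) const {
        if(region.maxX < _area.minX || region.minX > _area.maxX ||
           region.maxY < _area.minY || region.minY > _area.maxY) {
            return -1;
        }
        if(region.minX >= _area.minX && region.maxX <= _area.maxX &&
           region.minY >= _area.minY && region.maxY <= _area.maxY) {
            return 1;
        }
        return 0;
    }

    bool contains(const Point& key) const {
        return key.x >= _area.minX && key.x <= _area.maxX &&
               key.y >= _area.minY && key.y <= _area.maxY;
    }

private:
    Rect _area;
};
```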

The weird return value of the region testing method will help with optimization. If a region is fully contained by the search function, we just have to add all its elements, or those of all its sub-regions. So, we replace the retrieval method with the following in our search tree node:

        /**
         * Retrieve elements matching the search function.
         * @param func Search function.
         * @param buffer Storage for eligible elements.
         * @param size Size of the buffer.
         * @param <S> Search function type.
         * @return Number of eligible elements stored in the
         * buffer (capped at 'size').
         */
        template <typename S> unsigned int retrieve(const S& func,
                const E** buffer, unsigned int size) const;

As we've "optimized" the code to fetch an entire sub-tree as a result, we need a private method to do so:

    private:
        /**
         * Fetch the entire content of the tree.
         * @param buffer Array in which to fetch elements.
         * @param size Size of this array.
         * @return Number of retrieved elements.
         */
        unsigned int fetch(const E** buffer, unsigned int size) const;

No black magic here. Next, no surprise: everybody knows how to split/merge nodes in an N-ary tree. Let's focus on the search method.

template <typename K, typename R, typename E>
template <typename S>
unsigned int Node<K, R, E>::retrieve(const S& func, const E** buffer, unsigned int size) const {
    unsigned int result;
    // Here comes the fun.
    if(_elements != 0) {
        // We're in a leaf. There are two cases that lead here:
        // 1. This leaf intersects with the search function.
        // 2. This leaf is the root node and might not be relevant ...
        // In any case, we must confront all the elements to 'func',
        // stopping once the buffer is full.
        const E** cur = _elements;
        const E** dest = buffer;
        result = 0;
        for(unsigned int i = 0; (i < _count) && (result < size); ++i, ++cur) {
            if(func.contains((*cur)->key())) {
                *dest = *cur;
                ++dest;
                ++result;
            }
        }
    } else {
        // We're in a node.
        // Let's test all the subs against 'func'. In some cases,
        // fetch the whole sub-tree; in other cases, just recurse the retrieval.
        const E** dest = buffer;
        Node<K, R, E>** cur = _nodes;
        unsigned int remaining = size;
        unsigned int retrieved;
        int intersects;
        for(unsigned int i = 0; i < _count; ++i, ++cur) {
            intersects = func.contains((*cur)->_region);
            if(intersects >= 0) {
                if(intersects > 0) {
                    // Fully contained: fetch the whole sub-tree.
                    retrieved = (*cur)->fetch(dest, remaining);
                } else {
                    // Mere intersection: recurse and filter.
                    retrieved = (*cur)->retrieve(func, dest, remaining);
                }
                remaining -= retrieved;
                dest += retrieved;
            }
        }
        result = size - remaining;
    }
    return result;
}

The full code is available here. The next update, besides some optimization tweaks, is the usage of a custom allocator for data recycling. You may notice some new and delete calls when adding and removing elements from the tree. This can lead to memory fragmentation, and one knows that this is a performance pitfall. It will be the subject of a next post.
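To give an idea of what such data recycling could look like, here is a minimal free-list pool (a sketch only; 'Pool' is a hypothetical name, not the framework's future allocator):

```cpp
#include <cstddef>
#include <vector>

// Released objects go on a free list and are handed back on the next
// acquire, so steady-state add/remove churn stops hitting the heap.
template <typename T>
class Pool {
public:
    ~Pool() {
        for(std::size_t i = 0; i < _all.size(); ++i) {
            delete _all[i];
        }
    }

    T* acquire() {
        if(!_free.empty()) {
            T* recycled = _free.back();
            _free.pop_back();
            return recycled;
        }
        T* fresh = new T();
        _all.push_back(fresh);
        return fresh;
    }

    void release(T* object) { _free.push_back(object); }

    std::size_t allocated() const { return _all.size(); }

private:
    std::vector<T*> _all;  // Everything ever allocated (owned).
    std::vector<T*> _free; // Currently recyclable objects.
};
```

Plugged into the tree, node splits and merges would acquire/release nodes instead of calling new and delete directly.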

Abstract N-Ary Tree Search

Afternoon, the 12th of November, 2015

Here is an interesting read adapted to the current problem. However, the Java implementation relies on the Java API (and also suffers from some minor issues ...). Still, the concepts developed in the paper 'Non-blocking k-ary Search Trees' are very relevant. Someday, one should take the time to make a self-sufficient implementation of it in C++1x.

There's a lovely thing about Java: generics. It's something that C++11 lacks (C++14 somehow has it and C++17 will have it, perhaps). It's always enjoyable to resolve most of the design/coding issues at compile time and get rid of fake flexibility (remember kids: know your data!).

/**
 * Search Tree.
 *
 * Given that an element can be associated to a (non-unique) key,
 * Given that distance between two keys can be computed and expressed as a scalar,
 * it handles the following operations:
 * - Add an element given a key.
 * - Remove an element (by instance or by key).
 * - Find elements within the distance from a key.
 * @param <K> Key concept. It must come with the method 'double distance(const K&)'
 * @param <E> Element concept. Must expose the following method : 'const K& key()'
 */
template <typename K, typename E> class SearchTree {
    public:
        /**
         * Add an element.
         * @param element Pointer to the element to add.
         * @return 'true' if the element has been added.
         */
        bool add(const E* element);
        /**
         * Remove an element.
         * @param element Pointer to the element instance to remove.
         */
        void remove(const E* element);
        /**
         * Retrieve elements with a certain distance from the
         * specified key.
         * @param key Key.
         * @param distance Distance from the key.
         * @param buffer Storage for eligible elements.
         * @param size Size of the buffer.
         * @return Number of eligible elements. If there are more
         * eligible elements than 'size', a negative value is returned.
         */
        int retrieve(const K& key, double distance,
                const E** buffer, int size) const;
};

As usual, the code is torn between full-reference and full-pointer usage. Usually, it goes full pointer. But that's not the important part here.

The previous 'SearchTree' implementation has the same three operations (which is funny, as its conception predates the reading of the document cited earlier): add an element, remove an element, retrieve a set of elements from a key and a distance.

Just to show how handy generics would have been in Java:

public class SearchTree<K extends DistanceKey, E extends Identifiable<K>> {
    public boolean add(E element) { /* ... */ }
    public void remove(E element) { /* ... */ }
    public void retrieve(K key, double distance, Set<E> buffer) { /* ... */ }
}

The only difference here resides in the contracts implied by the generics declaration. This lack of contract can be a huge pitfall in C++1x for any beginner using a poorly documented library (or a beginner who doesn't read documentation ...). Of course, Java code bloat is not evoked here, for clarity's sake.

Well, there are still some concepts missing. We're talking about an N-ary (or k-ary) search tree. Hence, given the specified key type, we must determine the degree of the N-ary tree. Let's say that with a glm::vec2, we'll play with a quad-tree, and with a glm::vec3, it will be an octree. That's pretty obvious for us. Not for a dumb library.

First things first: there's no mention of a tree node yet. There are strategies that consist in saying that a tree is represented by its root node. Let's stick with that one. We'll need to define the Region concept. As its name implies, a region defines a subset of keys and also a way to divide itself into sub-regions. So, we get:

#define DEFAULT_CARD 16
/**
 * Search Tree Node.
 *
 * Given that an element can be associated to a (non-unique) key,
 * Given that distance between two keys can be computed and expressed as a scalar,
 * it handles the following operations:
 * - Add an element given a key.
 * - Remove an element (by instance or by key).
 * - Find elements within the distance from a key.
 * @param <K> Key concept. The distance computation is assumed by
 *     the provided Region instance.
 * @param <R> Region concept. The infamous one. Must implement the following methods:
 *     Provide the cardinality of region subdivision.
 *         unsigned int dimension() const;
 *     Compute the distance between two keys WITHIN the region.
 *         double distance(const K&, const K&) const;
 *     Divide the region (by providing an in/out parameter to store result).
 *         void divide(R*) const;
 *     Assign operator, copy constructor and so on. There will be copies.
 * @param <E> Element concept. Must expose the following method : 'const K& key()'
 */
template <typename K, typename R, typename E> class Node {
    public:
        /**
         * Constructor.
         * At creation, the node is a leaf and does not contain
         * any element.
         * @param region A node is defined for a particular region.
         * @param cardinality Maximum number of stored elements.
         */
        Node(const R& region, unsigned int cardinality = DEFAULT_CARD);

        // Note: add, remove and retrieve are declared exactly as before ...
        // Note: destructor not shown as it'll obviously be defined.
    private:
        /** Region of interest. */
        R               _region;
        /** Stored elements. 'null' if not a leaf. */
        const E**       _elements;
        /** Element count. */
        unsigned int    _count;
        /** Maximum number of elements or node count. */
        unsigned int    _cardinality;
        /** Sub-nodes. 'null' if leaf. */
        Node<K, R, E>** _nodes;
};

This post will never end ...

Do lines 37 to 46 itch? The fact that _cardinality has two meanings? The fact that a node contains both leaf and non-leaf data? The fact that data are used to determine the node's nature? All of that? Then find a way to do this without subtyping, or convincing arguments that the subtyping footprint is lower than this solution's.

Lines 13 to 20 define the Region concept. As you can see, we don't need to do more than this outside an instantiable search tree node implementation.
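For illustration, here is a hypothetical Region implementation for a quad-tree over 2D keys, honouring the methods listed in the Node documentation ('Point2' and 'AABB' are illustrative names; the real code would use glm::vec2):

```cpp
#include <cmath>

struct Point2 { double x, y; };

// Axis-aligned bounding box acting as the Region concept: it knows its
// subdivision cardinality, a key distance, and how to divide itself.
class AABB {
public:
    AABB() : _minX(0), _minY(0), _maxX(0), _maxY(0) {}
    AABB(double minX, double minY, double maxX, double maxY)
        : _minX(minX), _minY(minY), _maxX(maxX), _maxY(maxY) {}

    // Cardinality of the subdivision: a quad-tree splits in four.
    unsigned int dimension() const { return 4; }

    // Euclidean distance between two keys within the region.
    double distance(const Point2& a, const Point2& b) const {
        double dx = a.x - b.x;
        double dy = a.y - b.y;
        return std::sqrt(dx * dx + dy * dy);
    }

    // Fill 'result' (an array of dimension() regions) with the quadrants.
    void divide(AABB* result) const {
        double midX = (_minX + _maxX) / 2.0;
        double midY = (_minY + _maxY) / 2.0;
        result[0] = AABB(_minX, _minY, midX, midY);
        result[1] = AABB(midX, _minY, _maxX, midY);
        result[2] = AABB(_minX, midY, midX, _maxY);
        result[3] = AABB(midX, midY, _maxX, _maxY);
    }

    // The key containment test that turned out missing in the concept.
    bool contains(const Point2& key) const {
        return key.x >= _minX && key.x <= _maxX &&
               key.y >= _minY && key.y <= _maxY;
    }

private:
    double _minX, _minY, _maxX, _maxY;
};
```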

Besides the implementation itself, the only remaining topic should be the description and usage of a recycling strategy for nodes in order to avoid heap fragmentation. But this is early optimisation and won't provide anything good yet. There's a chance for it to become a default type parameter and a default constructor parameter dragged all the way between nodes. But this will be the subject of another -long- post.

Coloring and Searching

Mid-Morning, the 12th of November, 2015

What was meant to be a lightweight blog is growing. A new script has been added for syntax highlighting. It will help with code demonstrations. After some debate, PrismJS was chosen. Here's an example.

#include <glm/glm.hpp>
template <typename T> void Locator<T>::remove(T* element) {
    if(element != 0) {
        glm::vec2 position = element->position();
        List<T*>* list = locate(position);
        if(list != 0) {
            list->remove(element);
        }
    }
}

Nice, isn't it? By the way, the example code comes from a work in progress. It's a simple object locator for neighbourhood search. It's planned to be part of the Dumb Logic module in the framework.

Adding Scripts

Morning, the 1st of November, 2015

There's a cool library out there called KaTeX. It aims at including LaTeX-style formulae into pages like this one. Here is an example:

However, TONS of LaTeX capabilities aren't supported yet, maybe because it doesn't generate a real raster or SVG. Got to check its code. It should be enough for the concepts developed here.

About new concepts: yesterday, a proof of concept was evoked. Today, it will be implemented. There was an issue with an NN-search (nearest-neighbour) algorithm. Finally, a semi-naive one will be coded to test its efficiency.

Clean Up and Start Anew

Late Morning, the 31st of October, 2015

Thanks to bikeshed's purge-old-kernels script, the system was able to update its kernel. This script is an example of beautiful/effective coding.

The simple game in the making was a dead end. It's time to start anew. Not really a game, but a proof of concept.

Where's Time ?

Very Early, the 25th of October, 2015

A permanent job is very time consuming. And painkillers don't help with being productive; they tend to make you procrastinate. But things are progressing, slowly.

Since the last post, a long time ago, a text engine has been added to the framework. Nothing very exciting. It's been a while since its last update. It starts to rust.

Talkin' 'bout rust, there's a mean, interesting and yet very powerful language out there! Rust. It's for real coders and architects. You've got to THINK before doing anything, and you've got to do it right from the very beginning. There's already an engine for it called Piston. Nice one.

So! Is the framework a dead end? After a bit of reflection, the answer is no. One needs some robust engine to quickly start a project in a language which is not for the elite (Rust is, C++11 - the framework's language - isn't).

And don't you dare talk about Java or any managed language!

A simple game is in the making. It's time to train for the next Ludum Dare in December, and it's time to make another test case for the framework.

Long Time No See, hmm ?

Late, the 27th of May, 2015

Today, the two-week contribution streak of the github account will be over. In fact, yesterday doesn't count: it was just the opening of an issue. The current task, the sprite engine refactoring, is a bit time consuming and can't be completed in two or three hours.

A side/homemade project like this little framework is a relief. As a coder, it's a breath of fresh air. It represents a lot of experimentation, failures, but in the end, learning. And not only about programming languages or technologies, but also about methodology (and the required discipline - especially when there's a family to take care of).

All of this is part of a plan. Previous iterations of this blog told about it: first, create a toolbox, then use it for some experiments and simple games, and finally, illustrate decades of research on autonomous agents.

Finally, what can you expect from this blog? It's a static page, with no interactivity (for now). It will contain, as usual, side notes about progress and findings concerning active projects (algorithms, code, links to papers or interesting projects/libraries).