Nodes

Nodes are part of a bibite's brain. They are analogous to the neurons of an animal's brain. Each node holds a value that represents its activation level, and can stimulate other nodes through synapses (representing synaptic connections), which take the activation level of one node and use it to stimulate another. Nodes exist as 3 main types:

Input nodes: Representing the senses of the bibites, either internal (own state) or external (sensing the environment).

Output nodes: Used for all the actions the bibites can execute and all the internal processes they have control over.

Hidden nodes: Intermediary nodes that don't have a physical function, but that can be used to further process the signals from the input nodes before transmitting their own signal further along the propagation chain.

'' Note: It is eventually planned to expand their use so that they can represent a larger number of analogs, like physical characteristics or hormone levels, but that is presently not the case. ''

Stimulation
Nodes are stimulated by other nodes through synapses.

The stimulation value from a particular connection is equal to the stimulating node's activation level, multiplied by the synapse's strength.

As an example, if a connection with a strength of -1.5 connects a node with an activation level of 1.1 to another node, the receiving node will experience a stimulus of -1.65 from that connection.
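The calculation above can be sketched in a couple of lines. The game's actual implementation isn't public, so this is only an illustrative Python sketch of the rule described:

```python
def stimulus(activation: float, strength: float) -> float:
    # Stimulus the receiving node gets from one synapse:
    # the stimulating node's activation times the synapse's strength.
    return activation * strength

# A node at activation 1.1, through a synapse of strength -1.5:
print(stimulus(1.1, -1.5))  # -1.65 (up to floating-point rounding)
```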

Stimulation Accumulation
For now, neurons only accumulate signals through summative accumulation, meaning that the total stimulus they perceive is the sum of all the individual stimulations they receive. It is, however, planned to eventually include multiplicative accumulation.

As an example, if a node receives 3 stimulations of the following values: -1.2, 2.5, and 0.6, its total stimulus will be 1.9.
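Summative accumulation is just a sum over the incoming stimuli; a minimal Python sketch of the example above:

```python
def total_stimulus(stimuli):
    # Summative accumulation: the total is simply the sum of all
    # individual stimulations the node receives.
    return sum(stimuli)

print(total_stimulus([-1.2, 2.5, 0.6]))  # ~1.9
```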

Stimulation Range
The stimulation range of a node represents the complete range of possible total stimulation it could potentially receive.

As an example, a node receiving the two following connections:

-A first connection of strength 2.1, coming from a node with a possible output range of [0:1]

-And a second connection of strength -3.5, coming from another node with a possible output range of [-1:1]

would have a stimulation range of [-3.5 : 5.6].
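The range can be computed by scaling each source node's output range by the synapse strength (a negative strength flips the interval) and summing the endpoints. A hedged Python sketch, not the game's actual code:

```python
def stimulation_range(connections):
    # connections: list of (strength, (out_min, out_max)) per incoming synapse.
    lo = hi = 0.0
    for strength, (out_min, out_max) in connections:
        a, b = strength * out_min, strength * out_max
        lo += min(a, b)  # a negative strength flips the interval
        hi += max(a, b)
    return lo, hi

# The example from the text: strength 2.1 from a [0:1] node,
# strength -3.5 from a [-1:1] node.
print(stimulation_range([(2.1, (0.0, 1.0)), (-3.5, (-1.0, 1.0))]))  # ~(-3.5, 5.6)
```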

This can be a useful tool when studying the possible dynamics and behaviors that a particular network can produce.

Activation Functions
Except for input nodes, which can't receive synaptic stimulation from other nodes (their value being set by their respective systems), each neuron has an activation function defining how it responds to stimuli.

A node's total stimuli is passed through the activation function of the node to determine its resulting activation level.

Sigmoid Nodes (SIG):
The Sigmoid function is the default activation function of most nodes.

It is very popular in the field of artificial intelligence. Its activation value is bounded between 0 and 1, and it is therefore very useful for nodes representing something where a value outside of these bounds wouldn't make much sense.

Its default value of 0.5 means that it'll still present a signal when the node doesn't receive any external stimulation.

It is then best used when it represents a state or desire that should have some activation by default.
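Assuming the standard logistic sigmoid (the exact steepness used in-game isn't stated here), the function can be sketched as follows; note how an unstimulated node (input 0) lands exactly on the 0.5 default:

```python
import math

def sigmoid(x: float) -> float:
    # Standard logistic function: bounded between 0 and 1,
    # with sigmoid(0) = 0.5, the node's default activation.
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0))    # 0.5
print(sigmoid(10.0))   # close to 1
print(sigmoid(-10.0))  # close to 0
```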

Linear Nodes (LIN):
The Linear function is as simple as you can get.

It simply outputs the total stimulation it receives.

However, in order to prevent processing complications, its output value is capped between -100 and +100, which still leaves a more than reasonable window of activation.

It is best suited for states that can range from very low values to very high values.
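A minimal Python sketch of the linear function with the cap described above:

```python
def linear(x: float) -> float:
    # Identity function, clamped to [-100, 100] to prevent runaway values.
    return max(-100.0, min(100.0, x))

print(linear(42.0))        # 42.0
print(linear(1_000_000.0)) # 100.0 (capped)
```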

Hyperbolic Tangent Nodes (TanH):
The Hyperbolic Tangent function is also very popular as an activation function in the field of Artificial Intelligence.

It displays a similar shape to the Sigmoid function, but instead ranges from -1 to 1, resulting in a default output of 0 when not stimulated.

As a result, it's a more generic function with a wider range of uses.

It is best suited for states and desires where a negative value translates to a meaning that makes sense.
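Assuming the standard hyperbolic tangent, the behavior described above can be sketched as:

```python
import math

def tanh_node(x: float) -> float:
    # Ranges from -1 to 1; tanh(0) = 0, so the node is silent by default.
    return math.tanh(x)

print(tanh_node(0.0))   # 0.0 (default output when not stimulated)
print(tanh_node(5.0))   # close to 1
print(tanh_node(-5.0))  # close to -1
```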

Sine Nodes (SIN):
The sine function is pretty straightforward. It takes in a signal and makes it periodic.

If strongly stimulated ( [0 : 10+] ), a node using this activation function can give periodic behavior to an input signal.

If the stimulation range is a little smaller ( [0 : ~3] ), it can be used as an optimizer, where a value just high enough produces an output of 1, which then decreases if the signal keeps increasing.

If the stimulation range is small enough ( [ >-1 : <1 ] ), then the node will produce signals similar to a linear node.
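Assuming a plain sine (the in-game scaling isn't specified here), the three regimes above can be sketched as:

```python
import math

def sine_node(x: float) -> float:
    return math.sin(x)

# Small inputs behave almost linearly: sin(x) ~ x.
print(sine_node(0.1))          # ~0.0998

# "Optimizer" regime: peaks at 1 around x = pi/2 ~ 1.57,
# then decreases as the signal keeps increasing.
print(sine_node(math.pi / 2))  # 1.0
print(sine_node(3.0))          # ~0.141, already past the peak

# Large stimulation ranges sweep multiple periods, producing oscillation.
```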

Rectified Linear Nodes (ReLU):
Rectified Linear Units are another popular choice as an activation function in the field of Artificial Intelligence.

This Activation function is very similar to the Linear Function, but is capped in the negative range.

However, in order to prevent processing complications, its output value is capped at +100, which still leaves a more than reasonable window of activation.

It is best suited for states that can range from 0 to very high positive values.
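A minimal Python sketch of the ReLU function with the cap described above:

```python
def relu(x: float) -> float:
    # Zero for negative stimulation, identity above it, capped at +100.
    return min(max(0.0, x), 100.0)

print(relu(-5.0))          # 0.0 (capped in the negative range)
print(relu(3.5))           # 3.5
print(relu(1_000_000.0))   # 100.0 (capped)
```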

Gaussian Nodes (GAU):
The Gaussian function follows a bell shape. Its default value is 1.0 when not stimulated, and any stimulation, either positive or negative, will tend to decrease its activation output.

This allows nodes using this function to act as "inverters".

It's also possible to use this node as a "range selector" by stimulating it with the constant node in addition to a real, dynamic signal. The connection to the constant node effectively serves as an "offset", changing the value at which the other signal(s) produce the maximum output.

'' Disclaimer: I know this is not really a "Gaussian" function, but it resembles the shape enough that it works similarly while being easier to compute. ''
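Since the exact formula isn't given (and, per the disclaimer, isn't a true Gaussian), here is a hedged Python sketch using `1 / (1 + x^2)`, one cheap bell-shaped curve with the same key properties: a peak of 1.0 at zero stimulation, decaying on both sides. The `range_selector` helper is a hypothetical name illustrating the offset trick described above:

```python
def bell(x: float) -> float:
    # Illustrative bell curve (the game's actual formula may differ):
    # peaks at 1.0 for x = 0 and decreases for any stimulation, + or -.
    return 1.0 / (1.0 + x * x)

def range_selector(signal: float, offset: float) -> float:
    # A constant-node connection of strength -offset shifts the peak,
    # so the dynamic signal maximizes the output at signal == offset.
    return bell(signal - offset)

print(bell(0.0))               # 1.0 (default, unstimulated)
print(bell(2.0))               # 0.2
print(range_selector(3.0, 3.0))  # 1.0 (peak moved to signal = 3)
```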

Latch Nodes (LAT):
The Latch Function is the first non-linear function covered here (non-linear in the sense that the same stimulation will not always produce the same result).

Basically, a Latch Node holds an internal state, which is what it outputs, instead of a transformation of the input stimulation.

If the total stimulation is above 1.0, the node will set that internal value to 1.0, and if the total stimulation is below 0.0, it will set that internal value to 0.0. When neither condition is met (a stimulation between 0.0 and 1.0), it will keep outputting the last value that was set.

As such, the Latch Function can be used as a memory unit, where a stimulation above 1.0 acts as the "set", and a negative stimulation acts as the "reset".

The Latch nodes can also be useful to describe states that are either on or off, with no intermediary values.
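The set/hold/reset behavior can be sketched as a small stateful class (an illustrative sketch of the rules above, not the game's code):

```python
class LatchNode:
    """Holds an internal 0/1 state and outputs it directly."""

    def __init__(self):
        self.state = 0.0

    def activate(self, total_stimulus: float) -> float:
        if total_stimulus > 1.0:
            self.state = 1.0   # "set"
        elif total_stimulus < 0.0:
            self.state = 0.0   # "reset"
        # Between 0.0 and 1.0, the last value set is held.
        return self.state

latch = LatchNode()
print(latch.activate(1.5))   # 1.0 -> set
print(latch.activate(0.5))   # 1.0 -> held (memory)
print(latch.activate(-0.2))  # 0.0 -> reset
```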

Differential Nodes (DIF)
If you went through calculus, then this node is pretty straightforward.

It outputs the variation rate of its total perceived stimulation, normalized across different time speeds.

As some signals can vary very quickly, the node's output has been capped between -100 and +100 to prevent complications.

It can be a useful tool, as it can determine the rate of change of many senses. As an example, having a differential node stimulated by the speed input would allow a bibite to sense its acceleration (or deceleration) level.
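One common way to implement such a node is a finite difference divided by the simulation timestep, which is what normalizes the output across time speeds. A hedged sketch of that approach (the game's actual implementation may differ):

```python
class DifferentialNode:
    """Outputs the rate of change of its total stimulation per unit of time."""

    def __init__(self):
        self.previous = 0.0

    def activate(self, total_stimulus: float, dt: float) -> float:
        # Finite difference; dividing by dt normalizes across time speeds.
        rate = (total_stimulus - self.previous) / dt
        self.previous = total_stimulus
        # Capped like the linear node, since signals can vary very quickly.
        return max(-100.0, min(100.0, rate))

diff = DifferentialNode()
print(diff.activate(1.0, 0.5))  # 2.0 -> rose by 1.0 over 0.5 time units
print(diff.activate(1.0, 0.5))  # 0.0 -> signal is steady
```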