Nodes are a part of the bibite's brain, analogous to the neurons of an animal's brain. Nodes hold a value that represents their activation level, and can stimulate other nodes through synapses (representing synaptic connections), which take the activation level of one node and use it to stimulate another. Nodes exist as 3 main types:
- Input nodes: Represent the senses of the bibites, either internal (own state) or external (sensing the environment).
- Output nodes: Used for all the actions the bibites can execute and all the internal processes they have control over.
- Hidden nodes: Intermediary nodes that don't have a physical function, but that can be used to further process the signals from the input nodes before transmitting their own signal further down the propagation chain.
Note: It is eventually planned to expand their use so that they can represent a wider range of analogs, like physical characteristics or hormone levels, but this is presently not the case.
Stimulation
Nodes are stimulated by other nodes through synapses.
The stimulation value from a particular connection is equal to the stimulating node's activation level, multiplied by the synapse's strength.
As an example, if a connection with a strength of -1.5 connects a node with an activation level of 1.1 to another node, that receiving node will experience a stimulus of -1.65 from that connection.
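A minimal sketch of this computation in Python (the helper is illustrative, not the game's actual code):

```python
def stimulus(activation: float, strength: float) -> float:
    """Stimulation transmitted over one synapse: the stimulating node's
    activation level multiplied by the synapse's strength."""
    return activation * strength

# The example above: strength -1.5, stimulating node at activation 1.1.
print(stimulus(1.1, -1.5))  # ~-1.65
```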
Stimulation Accumulation
Most neurons only accumulate signals through summative accumulation, meaning that the total stimulus they perceive is the sum of all the individual stimulations they receive. The only exception is the Mult node, which accumulates using the product of all stimulations instead of the sum.
For example, if a non-Mult node receives 3 stimulations of the following values: -1.2, 2.5, and 0.6, its total stimulus will be 1.9. If a Mult node receives the same 3 inputs, its total stimulus will be -1.8.
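A sketch of both accumulation modes, reproducing the numbers above (illustrative Python, not the game's code):

```python
import math

def accumulate_sum(stimuli: list[float]) -> float:
    """Summative accumulation, used by most node types."""
    return sum(stimuli)

def accumulate_mult(stimuli: list[float]) -> float:
    """Multiplicative accumulation, used only by Mult nodes."""
    return math.prod(stimuli)

stimuli = [-1.2, 2.5, 0.6]
print(accumulate_sum(stimuli))   # ~1.9
print(accumulate_mult(stimuli))  # ~-1.8
```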
Stimulation Range
The stimulation range of a node represents the complete range of possible total stimulation it could potentially receive.
As an example, if a node receives the two following connections:
- A first connection of strength 2.1 coming from a node with a possible output range of [0 : 1]
- A second connection of strength -3.5 coming from another node with a possible output range of [-1 : 1]
then the node has a stimulation range of [-3.5 : 5.6].
This can be a useful tool when studying the possible dynamics and behaviors that a particular network can produce.
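One way to compute this range is with simple interval arithmetic, sketched below (the function name and structure are illustrative, not from the game):

```python
def stimulation_range(connections: list[tuple[float, tuple[float, float]]]) -> tuple[float, float]:
    """Total stimulation range of a node, given (strength, (lo, hi)) pairs,
    where (lo, hi) is the output range of the stimulating node. Each
    connection contributes an interval scaled by its strength; the bounds
    of those intervals are summed."""
    total_lo, total_hi = 0.0, 0.0
    for strength, (lo, hi) in connections:
        a, b = strength * lo, strength * hi
        total_lo += min(a, b)  # a negative strength flips the interval
        total_hi += max(a, b)
    return total_lo, total_hi

# The example above: strength 2.1 from a [0:1] node,
# and strength -3.5 from a [-1:1] node.
print(stimulation_range([(2.1, (0.0, 1.0)), (-3.5, (-1.0, 1.0))]))  # (-3.5, 5.6)
```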
Activation Functions
Except for input nodes, which can't receive synaptic stimulation from other nodes (their values being set by their respective systems), each neuron has an activation function defining how it responds to stimuli.
A node's total stimulus is passed through its activation function to determine its resulting activation level.
Sigmoid Nodes (SIG)

| Sigmoid Properties | |
| --- | --- |
| Output Range | 0.0 to 1.0 |
| Default Value (when not stimulated) | 0.5 |
| Formula | $f(x) = \frac{1}{1 + e^{-x}}$ |
| Index | 1 |
The Sigmoid function is the default activation function of most nodes.
It is very popular in the field of artificial intelligence. Its activation value is bounded between 0 and 1, making it very useful for nodes that represent something where a value outside of these bounds wouldn't make much sense.
Its default value of 0.5 means that it will still output a signal even when the node receives no external stimulation.
It is therefore best used to represent a state or desire that should have some activation by default.
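For reference, a quick sketch of the standard logistic sigmoid given in the table above:

```python
import math

def sigmoid(x: float) -> float:
    """Standard logistic sigmoid: output bounded between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0))   # 0.5 -- the default value when not stimulated
print(sigmoid(5.0))   # ~0.993
print(sigmoid(-5.0))  # ~0.007
```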
Linear Nodes (LIN)

| Linear Properties | |
| --- | --- |
| Output Range | -100 to 100 |
| Default Value (when not stimulated) | 0 |
| Formula | $f(x) = \max(-100, \min(100, x))$ |
| Index | 2 |
The Linear function is as simple as you can get.
It simply outputs the total stimulation it receives.
However, in order to prevent processing complications, its output value is still capped between -100 and +100, which leaves a more than reasonable window of activation.
It is best suited for states that can range from very low values to very high values.
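A sketch of this clamped identity, assuming a hard cap at ±100 as described:

```python
def linear(x: float) -> float:
    """Identity function, clamped to [-100, 100]."""
    return max(-100.0, min(100.0, x))

print(linear(42.0))    # 42.0
print(linear(2500.0))  # 100.0 -- capped
```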
Hyperbolic Tangent Nodes (TanH)

| TanH Properties | |
| --- | --- |
| Output Range | -1.0 to 1.0 |
| Default Value (when not stimulated) | 0 |
| Formula | $f(x) = \tanh(x)$ |
| Index | 3 |
The Hyperbolic Tangent function is also very popular as an activation function in the field of Artificial Intelligence.
It displays a similar shape to the Sigmoid function, but instead ranges from -1 to 1, resulting in a default output of 0 when not stimulated.
As a result, it's a more generic function with a wider range of uses.
It is best suited for states and desires where a negative value has a sensible interpretation.
Sine Nodes (SIN)

| Sine Properties | |
| --- | --- |
| Output Range | -1.0 to 1.0 |
| Default Value (when not stimulated) | 0 |
| Formula | $f(x) = \sin(x)$ |
| Index | 4 |
The sine function is pretty straightforward. It takes in a signal and makes it periodic.
If strongly stimulated ( [0 : 10+] ), a node using this activation function turns an input signal into a periodic output.
If the stimulation range is a little smaller ( [0 : ~3] ), it can be used as an optimizer: a value just high enough produces an output of 1, and the output decreases again if the signal keeps increasing.
If the stimulation range is small enough ( [>-1 : <1] ), then the node produces signals similar to a linear node, since sin(x) ≈ x for small values.
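A quick Python illustration of those three regimes:

```python
import math

# Small stimulations ( [>-1 : <1] ): sin(x) ~ x, nearly linear behavior.
print(math.sin(0.2))   # ~0.199

# Moderate stimulations ( [0 : ~3] ): output peaks at 1 near x = pi/2
# (~1.57) and decreases again as the stimulation keeps increasing.
print(math.sin(1.57))  # ~1.0
print(math.sin(3.0))   # ~0.141

# Strong stimulations ( [0 : 10+] ): the output sweeps periodically
# between -1 and 1 as the stimulation grows.
print([round(math.sin(x), 2) for x in range(10)])
```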
Rectified Linear Nodes (ReLU)

| ReLU Properties | |
| --- | --- |
| Output Range | 0 to 100.0 |
| Default Value (when not stimulated) | 0 |
| Formula | $f(x) = \min(\max(0, x), 100)$ |
| Index | 5 |
Rectified Linear Units are another popular choice as an activation function in the field of Artificial Intelligence.
This activation function is very similar to the Linear function, but any negative total stimulation is cut off to 0.
However, in order to prevent processing complications, its output value will be capped at +100, still leaving a more than reasonable window of activation.
It is best suited for states that can range from 0 to very high positive values.
Gaussian Nodes (GAU)

| Gaussian Properties | |
| --- | --- |
| Output Range | 0.0 to 1.0 |
| Default Value (when not stimulated) | 1.0 |
| Formula | $f(x) = \frac{1}{1 + x^2}$ (pseudo-Gaussian; see the disclaimer below) |
| Index | 6 |
The Gaussian function follows a bell shape. Its default value is 1.0 when not stimulated, and any stimulation, either positive or negative, will tend to decrease its activation output.
This allows nodes using this function to act as "inverters".
It's also possible to use this node as a "range selector" by stimulating it with the constant node in addition to a real dynamic signal: the connection from the constant node effectively serves as an "offset", changing which value of the other signal(s) produces the maximum output.
Disclaimer: This is not really a "Gaussian" function, but it resembles the shape enough that it works similarly while being easier to compute.
This inaccuracy can actually be exploited to create an approximation of a division node!
The node setup is [Input] - x100 -> [Gaussian] - x100 -> [Mult], and [Input] - x100 -> [Mult]. This makes the Mult node's output approximate 1/Input. To compute another number divided by Input, add that number as another input to the Mult node.
The idea behind this approximation is that x / (1 + x^2) is very close to 1/x when x is large. Scaling by a large constant k, as in x * k^2 / (1 + (x * k)^2), gives a better approximation for smaller values of x. For k = 100, the error is less than 10% for x > 0.03 or x < -0.03, less than 1% for x > 0.1 or x < -0.1, and even less for larger values of x.
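A sketch of that approximation in Python, assuming the node's pseudo-Gaussian is f(x) = 1 / (1 + x²) as the derivation above implies (function names are illustrative):

```python
def gaussian(x: float) -> float:
    """Assumed pseudo-Gaussian: 1 / (1 + x^2)."""
    return 1.0 / (1.0 + x * x)

def approx_reciprocal(x: float, k: float = 100.0) -> float:
    """The Mult node's output in the setup above: it multiplies
    k * gaussian(k * x) with k * x, which simplifies to
    x * k^2 / (1 + (x * k)^2), an approximation of 1 / x."""
    return (k * gaussian(k * x)) * (k * x)

print(approx_reciprocal(0.1))  # ~9.90, vs. 1 / 0.1 = 10 (about 1% error)
print(approx_reciprocal(2.0))  # ~0.49999, vs. 1 / 2 = 0.5
```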
Latch Nodes (LAT)

| Latch Properties | |
| --- | --- |
| Output Range | 0 or 1 |
| Default Value (when not stimulated) | 0 |
| Formula | $f_t = 1$ if $x_t > 1$; $f_t = 0$ if $x_t < 0$; otherwise $f_t = f_{t-1}$ |
| Index | 7 |
The Latch Function is the first non-linear function, in the sense that the same stimulation will not always produce the same result.
Basically, a Latch node holds an internal state, which is what it outputs, rather than a transformation of the input stimulation.
If the total stimulation is above 1.0, the node sets that internal value to 1.0, and if the total stimulation is below 0.0, it sets it to 0.0. When neither condition is met (total stimulation between 0.0 and 1.0), it keeps outputting the last value that was set.
As such, the Latch Function can be used as a memory unit, where a stimulation above 1.0 acts as the "set" and a negative stimulation acts as the "reset".
Latch nodes can also be useful for describing states that are either on or off, with no intermediary values.
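A stateful sketch of this behavior (thresholds taken from the description above; the class name is illustrative):

```python
class LatchNode:
    """Set/reset latch: stimulation above 1.0 sets the output to 1,
    stimulation below 0.0 resets it to 0, and anything in between
    holds the previously set value."""

    def __init__(self) -> None:
        self.state = 0.0  # default output when never stimulated

    def activate(self, total_stimulation: float) -> float:
        if total_stimulation > 1.0:
            self.state = 1.0
        elif total_stimulation < 0.0:
            self.state = 0.0
        return self.state

latch = LatchNode()
print(latch.activate(1.5))   # 1.0 -- "set"
print(latch.activate(0.5))   # 1.0 -- held (stimulation between 0 and 1)
print(latch.activate(-0.2))  # 0.0 -- "reset"
```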
Differential Nodes (DIF)

| Differential Properties | |
| --- | --- |
| Output Range | -100.0 to 100.0 |
| Default Value (when not stimulated) | 0 |
| Formula | $f(x) = \frac{dx}{dt}$ (capped between -100 and 100) |
| Index | 8 |
If you went through calculus, then this node is pretty straightforward.
It outputs the rate of change of its total perceived stimulation, normalized across different time speeds.
As some signals can vary very quickly, the node's output has been capped between -100 and +100 to prevent complications.
It can be a useful tool for determining the rate of change of many senses. As an example, a differential node stimulated by the speed input would allow a bibite to sense its acceleration (or deceleration) level.
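A finite-difference sketch of this behavior, assuming the node divides the change in stimulation by the time step dt (the game's exact implementation may differ):

```python
class DifferentialNode:
    """Outputs the rate of change of its total stimulation, divided by
    the time step dt so the result is consistent across time speeds."""

    def __init__(self) -> None:
        self.previous = 0.0

    def activate(self, total_stimulation: float, dt: float) -> float:
        rate = (total_stimulation - self.previous) / dt
        self.previous = total_stimulation
        return max(-100.0, min(100.0, rate))  # capped, as described above

node = DifferentialNode()
print(node.activate(0.5, dt=0.1))  # 5.0 -- the input rose by 0.5 in 0.1s
print(node.activate(0.5, dt=0.1))  # 0.0 -- the input didn't change
```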
Absolute Nodes (Abs)

| Absolute Properties | |
| --- | --- |
| Output Range | 0.0 to 100.0 |
| Default Value (when not stimulated) | 0 |
| Formula | $f(x) = \min(\lvert x \rvert, 100)$ |
| Index | 9 |
This is one of the simplest activation functions.
It outputs the absolute value of its total perceived stimulation. Basically, if the total is negative, this will make it positive.
It can be a useful tool if you don't care about the sign of the input signal and want to produce a reaction nonetheless.
Multiply Nodes (Mult)

| Multiply Properties | |
| --- | --- |
| Output Range | -100.0 to 100.0 |
| Default Value (when not stimulated) | 1 |
| Formula | $f = \prod_i w_i a_i$, where $a_i$ is the activation level of input node $i$ and $w_i$ the strength of its synapse (capped between -100 and 100) |
| Index | 10 |
This node has the same activation function as the Linear node; i.e., effectively none beyond the ±100 cap.
What makes it special is that it accumulates its inputs from synapses by multiplying them instead of adding them.
Integrator Nodes

| Integrator Properties | |
| --- | --- |
| Output Range | -100.0 to 100.0 |
| Default Value (when not stimulated) | 0 |
| Formula | $f_t = f_{t-1} + x_t \, \Delta t$, where $x_t$ is the total stimulation and $\Delta t$ the simulation time step (capped between -100 and 100) |
| Index | 11 |
If you went through calculus, then this node is pretty straightforward.
It keeps a running sum of each stimulation it receives, weighted by how long it lasts: a stimulation that lasts longer counts more toward the total, and vice versa (so it's normalized across different simulation speeds). This makes it the inverse of the differential node.
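A sketch of this time-weighted running sum, assuming each tick adds stimulation × dt (illustrative; the exact implementation may differ):

```python
class IntegratorNode:
    """Keeps a time-weighted running sum of its stimulation: each tick
    adds stimulation * dt, so longer-lasting signals count for more."""

    def __init__(self) -> None:
        self.total = 0.0

    def activate(self, total_stimulation: float, dt: float) -> float:
        self.total += total_stimulation * dt
        return max(-100.0, min(100.0, self.total))  # capped output

node = IntegratorNode()
for _ in range(10):
    node.activate(2.0, dt=0.1)
print(node.total)  # ~2.0 -- a stimulation of 2 held for 1 second overall
```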
Inhibitory Nodes

| Inhibitory Properties | |
| --- | --- |
| Output Range | -100.0 to 100.0 |
| Default Value (when not stimulated) | 0 |
| Formula | $f_t = x_t - h_t$, where $h_t$ is an internal inhibition value that gradually grows to match the input $x_t$ |
| Index | 12 |
These are complicated.
If the bias is zero, these behave the same as a linear node.
Otherwise, an internal inhibition value is subtracted from the output; this value slowly grows over time to match the current input, so that the output slowly decays towards zero.
For example, imagine that on one tick the input is at 0, and the inhibitor outputs 0. The next tick the input suddenly moves to 1, and the inhibitor now outputs 1. Pretty simple, right? But if the input then stays constant, the inhibition catches up and the inhibitor's output slowly decays back down to 0: 0.9, 0.81, 0.729, and so on.
Now let's say the input suddenly drops back to 0. The output becomes approximately -0.34, because the neuron still "remembers" the inhibition it built up to pull the output towards 0, and that value hasn't fully caught up yet. Over the next few ticks, it keeps decaying towards zero: approximately -0.31, -0.28, -0.25...
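A sketch that reproduces the numbers in this example, assuming the inhibition value catches up to the input by 10% per tick (this rate is chosen to match the example; how the actual in-game rate relates to the node's bias and simulation speed is not specified here):

```python
class InhibitoryNode:
    """Output = input minus an internal inhibition value; the inhibition
    slowly grows toward the current input, so a held input decays to 0."""

    def __init__(self, catch_up_rate: float = 0.1) -> None:
        self.rate = catch_up_rate  # assumed; picked to match the example
        self.inhibition = 0.0

    def activate(self, x: float) -> float:
        output = x - self.inhibition
        self.inhibition += self.rate * (x - self.inhibition)
        return output

node = InhibitoryNode()
print([round(node.activate(1.0), 3) for _ in range(4)])  # [1.0, 0.9, 0.81, 0.729]
print(round(node.activate(0.0), 2))                      # -0.34
print([round(node.activate(0.0), 2) for _ in range(3)])  # [-0.31, -0.28, -0.25]
```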