Use case description

This extension focuses on decoding, with or without encoding running in parallel.

Requirements:

The first requirement is a minimal decoding model serialized in HDF5. This can be done in Python once for every experiment; see the pycompressed-decoder documentation for more information.
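To sanity-check the serialized model, its groups and datasets can be listed with h5py. This is a generic HDF5 inspection, not a pycompressed-decoder API, and the file name is the one expected in the decoder folder described below:

    import h5py

    # Print every group and dataset path stored in the model file.
    with h5py.File("decoder.hdf5", "r") as f:
        f.visit(print)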

Several points need to be decided at this step, and they strongly influence the real-time pipeline:

  • spike feature names need to match the ones available in the spikeFeature processor (timestamp, amplitude, slope, channel index, channel depth)

  • spatial features are more flexible, but their names still need to match the feature names provided by the position tracking system in use.

  • how spike features are split across the likelihoods used in the model (a small generator sketch follows the YAML example below). Usual splits are based on the channels:
    • 1:1 (one likelihood per channel)

    • 1:n (one likelihood for all n channels)

    • x:n/x (n/x likelihoods, each containing x channels)

A YAML file (sensormap.yaml) describing this split needs to be given as input to the model.

likelihoods_0_0:
- 0
likelihoods_1_0:
- 1

...
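As a sketch, such a mapping can be generated with a few lines of Python. The likelihoods_<index>_0 key pattern is taken from the example above (the meaning of the trailing _0 is assumed from that example), and the script itself is illustrative, not part of pycompressed-decoder:

    import yaml

    n_channels = 8   # total number of channels (n)
    group_size = 2   # channels per likelihood (x), here an x:n/x split

    # One entry per likelihood, listing the channel indices it covers.
    sensormap = {
        f"likelihoods_{i}_0": list(range(start, start + group_size))
        for i, start in enumerate(range(0, n_channels, group_size))
    }

    with open("sensormap.yaml", "w") as f:
        yaml.safe_dump(sensormap, f, default_flow_style=False)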

The decoder folder needs to contain the decoder.hdf5 file plus each likelihood in its own file, named Likelihoods_[number].hdf5.
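The expected layout can be checked with a short, illustrative helper (the folder path is an assumption):

    from pathlib import Path

    decoder_dir = Path("decoder")  # hypothetical folder path

    # The folder must contain the decoder itself...
    assert (decoder_dir / "decoder.hdf5").exists(), "missing decoder.hdf5"

    # ...plus one Likelihoods_[number].hdf5 file per likelihood in the sensormap.
    likelihood_files = sorted(decoder_dir.glob("Likelihoods_*.hdf5"))
    print(f"found {len(likelihood_files)} likelihood files")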

The model can then be trained in advance and used in a decoding-only graph. It seems, however, that a new model needs to be trained for each session. In that case, the session needs to be split in two: the first part generates and records the data, the model is then trained on it, and the decoding falcon graph finally uses this model in the second part of the session. A way to overcome this restriction is to use a decoding/encoding falcon graph, where the model is trained in real time in parallel with the experiment.

The second requirement concerns the computer used to run falcon. The graphs described here generate a large number of threads, depending heavily on the number of channels to process and on the degree of CPU parallelization available, and this directly impacts processing speed. For example, for 384 channels, these graphs were tested with offline data on a supercomputer with 64 (virtual) CPUs.
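As a rough sanity check before running, the number of likelihoods implied by the chosen split can be compared with the cores available on the machine. The assumption that each likelihood occupies roughly one thread is ours; the real graph may spawn more:

    import os

    n_channels = 384
    group_size = 1  # 1:1 split, i.e. one likelihood per channel

    n_likelihoods = n_channels // group_size
    n_cpus = os.cpu_count()

    print(f"{n_likelihoods} likelihoods vs {n_cpus} CPUs")
    if n_cpus is not None and n_likelihoods > n_cpus:
        print("more likelihoods than cores: expect threads to share CPUs")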