Training the decoding model in real time

Reading the position data

The position data are streamed from your tracking device. Fklab has its own streaming system that provides (at least) the x and y positions and the head orientation.

These data also drive a speed threshold that enables or disables the decoding/encoding processing. Below the threshold, we decode based on the spike data; above the threshold, the data are used to train the model.
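The threshold logic above can be sketched as a small helper. This is illustrative only: the function names and the use of a two-sample speed estimate are assumptions, not the actual implementation.

```python
import math

def compute_speed(p0, p1, dt):
    """Estimate speed from two consecutive (x, y) position samples
    that are dt seconds apart (e.g. dt = 1/25 s for a 25 Hz tracker)."""
    return math.hypot(p1[0] - p0[0], p1[1] - p0[1]) / dt

def select_mode(speed, threshold):
    """Below the speed threshold the animal is immobile, so we decode
    from the spike data; at or above it, the data train (encode) the model."""
    return "decode" if speed < threshold else "encode"
```

For example, an animal moving 2 cm between two 25 Hz samples travels at 50 cm/s, which with a 10 cm/s threshold would select the encoding path.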

Reading the spike data

The whole neural-data pipeline described earlier for decoding can be reused here. The output of the SpikeFeature processor can be fed directly into the spatialFeature processor.

Compute the spatial features

In this processor, spatial features are computed from the position data in addition to the spike features. The feature names are given in the graph; they need to match both the spatial feature names used in the offline model and the labels of the data provided by the tracking system.
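Because a mismatch between the graph's feature names, the offline model, and the tracker labels is an easy configuration error, a sanity check at startup can catch it early. This is a hypothetical sketch; the real processor's validation may differ.

```python
def validate_feature_names(model_features, tracker_labels):
    """Check that every spatial feature name expected by the offline model
    is among the labels streamed by the tracking system.

    model_features: feature names stored in the offline model (e.g. ["x", "y"])
    tracker_labels: labels of the incoming position data
    """
    missing = [name for name in model_features if name not in tracker_labels]
    if missing:
        raise ValueError(
            f"Tracking system does not provide required features: {missing}"
        )
```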

Note

The position data arrive at a slower rate, usually 25 Hz. For the moment, the two data inputs are not resynchronized: the processor takes the data as they arrive. This could be improved later, if needed, by synchronizing the two clocks and matching the timestamps.
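The timestamp-matching improvement mentioned above could, for instance, pair each spike with the nearest position sample. A minimal sketch, assuming both streams share a synchronized clock and position timestamps are sorted:

```python
import bisect

def nearest_position_index(position_times, spike_time):
    """Return the index of the position sample whose timestamp is closest
    to spike_time. position_times must be sorted (monotonic clock)."""
    i = bisect.bisect_left(position_times, spike_time)
    if i == 0:
        return 0
    if i == len(position_times):
        return len(position_times) - 1
    # Choose between the sample just before and just after the spike.
    before, after = position_times[i - 1], position_times[i]
    return i - 1 if spike_time - before <= after - spike_time else i
```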

Train and update the model

The encoding processor trains the model on the features whenever the to_encode state is active (based on the speed threshold). It relies on the pycompressed-decoder library. Its options define whether the model should be saved at the end, and how often the model used by the decoding pipeline is updated.
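The encode-gated training with a periodic model update can be illustrated with a toy class. Everything here (class and attribute names, the sample buffer) is a hypothetical sketch of the control flow; the actual model fitting is done by pycompressed-decoder.

```python
class OnlineEncoder:
    """Accumulate (spike_feature, spatial_feature) pairs while the
    to_encode state is active, and count a model update every
    update_every samples, mimicking the update frequency option."""

    def __init__(self, update_every=100):
        self.update_every = update_every
        self.samples = []   # training buffer
        self.updates = 0    # how many times the decoder's model was refreshed

    def process(self, spike_feature, spatial_feature, to_encode):
        if not to_encode:
            return  # below the speed threshold: decoding path, no training
        self.samples.append((spike_feature, spatial_feature))
        if len(self.samples) % self.update_every == 0:
            # Here the updated model would be pushed to the decoding pipeline.
            self.updates += 1
```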