Analyzing temporal features#

Time series data can be characterized using oscillatory components, but the assumption of sinusoidality is rarely fulfilled for real data. See Cole et al. 2017, “Brain Oscillations and the Importance of Waveform Shape”, for a great motivation. Here, temporal characteristics are computed from individual trough and peak relations, based on the scipy.signal.find_peaks method. The function parameter distance can be specified in the nm_settings.json. Temporal features can be calculated twice, once for troughs and once for peaks. In the settings, this is specified by setting estimate to true in detect_troughs and/or detect_peaks. A statistical measure (e.g. mean, max, median, var) of the peak and trough estimates can be defined as the resulting feature using the apply_estimator_between_peaks_and_troughs setting.

In py_neuromodulation the following characteristics are implemented:

Note

The nomenclature below is written for sharpwave troughs, but peak characteristics are computed in the same way.

  • prominence: \(V_{prominence} = |\frac{V_{peak-left} + V_{peak-right}}{2}| - V_{trough}\)

  • sharpness: \(V_{sharpness} = \frac{(V_{trough} - V_{trough-5 ms}) + (V_{trough} - V_{trough+5 ms})}{2}\)

  • rise and decay time

  • rise and decay steepness

  • width (between left and right peaks)

  • interval (between troughs)
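The trough-based definitions above can be illustrated with a minimal sketch on a synthetic sine wave. This is only an illustration of the formulas, not the library implementation; the sampling rate, the distance parameter, and the 5 ms sharpness offset are assumptions chosen for the example:

```python
import numpy as np
from scipy import signal

# Synthetic 10 Hz oscillation sampled at 1 kHz (illustrative only).
sfreq = 1000
t = np.arange(0, 1, 1 / sfreq)
x = np.sin(2 * np.pi * 10 * t)

# Troughs are found as peaks of the inverted signal.
troughs = signal.find_peaks(-x, distance=10)[0]
peaks = signal.find_peaks(x, distance=10)[0]

samples_5ms = int(5e-3 * sfreq)  # 5 ms expressed in samples

features = []
for trough in troughs:
    # adjacent peaks left and right of the current trough
    left = peaks[peaks < trough]
    right = peaks[peaks > trough]
    if len(left) == 0 or len(right) == 0:
        continue  # skip troughs without a surrounding peak pair
    v_left, v_right, v_trough = x[left[-1]], x[right[0]], x[trough]
    # prominence: mean magnitude of adjacent peaks minus trough value
    prominence = np.abs((v_left + v_right) / 2) - v_trough
    # sharpness: mean voltage drop towards the trough over +/- 5 ms
    sharpness = ((v_trough - x[trough - samples_5ms])
                 + (v_trough - x[trough + samples_5ms])) / 2
    features.append((prominence, sharpness))
```

For a unit-amplitude sine, each trough yields a prominence close to 2, while the sharpness is small and negative (the trough sits below its 5 ms neighbors).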

Additionally, different filter ranges can be parametrized using the filter_ranges_hz setting. Filtering is necessary to remove high-frequency signal fluctuations, but the resulting signal smoothing also limits the accuracy of the sharpness and prominence estimates.
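As a sketch of what one such filter range does, the following designs a linear-phase FIR bandpass with scipy.signal.firwin (the 5-30 Hz range, the number of taps, and the sampling rate are assumptions for illustration) and shows the smoothing effect on white noise:

```python
import numpy as np
from scipy import signal

sfreq = 1000  # Hz, assumed sampling rate for this sketch
# Linear-phase FIR bandpass, e.g. one 5-30 Hz entry of filter_ranges_hz.
fir = signal.firwin(numtaps=201, cutoff=[5, 30], pass_zero=False, fs=sfreq)

rng = np.random.default_rng(0)
raw = rng.standard_normal(5000)          # white-noise stand-in for ECoG data
filtered = np.convolve(raw, fir, mode="same")

# The bandpass removes most of the broadband power: only the 5-30 Hz
# content survives, so the variance drops markedly.
print(raw.var(), filtered.var())
```

The narrower the filter range, the stronger the smoothing, and the more the per-trough sharpness and prominence values are attenuated.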

import seaborn as sb
from matplotlib import pyplot as plt
from scipy import signal
import numpy as np

import py_neuromodulation as nm
from py_neuromodulation import (
    nm_define_nmchannels,
    nm_IO,
    nm_settings,
)

We will first read the example ECoG data and plot the identified features on the filtered time series.

RUN_NAME, PATH_RUN, PATH_BIDS, PATH_OUT, datatype = nm_IO.get_paths_example_data()

(
    raw,
    data,
    sfreq,
    line_noise,
    coord_list,
    coord_names,
) = nm_IO.read_BIDS_data(
        PATH_RUN=PATH_RUN,
        BIDS_PATH=PATH_BIDS, datatype=datatype
)
Extracting parameters from /home/docs/checkouts/readthedocs.org/user_builds/py-neuromodulation/envs/latest/lib/python3.11/site-packages/py_neuromodulation/data/sub-testsub/ses-EphysMedOff/ieeg/sub-testsub_ses-EphysMedOff_task-gripforce_run-0_ieeg.vhdr...
Setting channel info structure...
Reading channel info from /home/docs/checkouts/readthedocs.org/user_builds/py-neuromodulation/envs/latest/lib/python3.11/site-packages/py_neuromodulation/data/sub-testsub/ses-EphysMedOff/ieeg/sub-testsub_ses-EphysMedOff_task-gripforce_run-0_channels.tsv.
Reading electrode coords from /home/docs/checkouts/readthedocs.org/user_builds/py-neuromodulation/envs/latest/lib/python3.11/site-packages/py_neuromodulation/data/sub-testsub/ses-EphysMedOff/ieeg/sub-testsub_ses-EphysMedOff_space-mni_electrodes.tsv.
settings = nm_settings.get_default_settings()
settings = nm_settings.set_settings_fast_compute(settings)

settings["features"]["fft"] = True
settings["features"]["bursts"] = False
settings["features"]["sharpwave_analysis"] = True
settings["features"]["coherence"] = False

settings["sharpwave_analysis_settings"]["estimator"]["mean"] = []
for sw_feature in list(
    settings["sharpwave_analysis_settings"]["sharpwave_features"].keys()
):
    settings["sharpwave_analysis_settings"]["sharpwave_features"][sw_feature] = True
    settings["sharpwave_analysis_settings"]["estimator"]["mean"].append(sw_feature)

nm_channels = nm_define_nmchannels.set_channels(
    ch_names=raw.ch_names,
    ch_types=raw.get_channel_types(),
    reference="default",
    bads=raw.info["bads"],
    new_names="default",
    used_types=("ecog", "dbs", "seeg"),
    target_keywords=["MOV_RIGHT"]
)

stream = nm.Stream(
    sfreq=sfreq,
    nm_channels=nm_channels,
    settings=settings,
    line_noise=line_noise,
    coord_list=coord_list,
    coord_names=coord_names,
    verbose=False,
)
sw_analyzer = stream.run_analysis.features.features[1]

The plotted example time series, visualized on a short time scale, shows the relation of identified peaks, troughs, and estimated features:

data_plt = data[5, 1000:4000]


sw_analyzer._initialize_sw_features()
filtered_dat = np.convolve(
    data_plt,
    sw_analyzer.list_filter[0][1],
    mode="same"
)
#filtered_dat = filtered_dat[500:-500]

troughs = signal.find_peaks(-filtered_dat, distance=10)[0]
peaks = signal.find_peaks(filtered_dat, distance=5)[0]

sw_analyzer.data_process_sw = filtered_dat
sw_analyzer.analyze_waveform()

WIDTH = BAR_WIDTH = 4
BAR_OFFSET = 50
OFFSET_TIME_SERIES = -100
SCALE_TIMESERIES = 1

hue_colors = sb.color_palette("viridis_r", 6)

plt.figure(figsize=(5, 3), dpi=300)
plt.plot(OFFSET_TIME_SERIES + data_plt, color="gray", linewidth=0.5, alpha=0.5, label="original ECoG data")
plt.plot(OFFSET_TIME_SERIES + filtered_dat*SCALE_TIMESERIES, linewidth=0.5, color="black", label="[5-30]Hz filtered data")

plt.plot(peaks, OFFSET_TIME_SERIES + filtered_dat[peaks]*SCALE_TIMESERIES, "x", label="peaks",markersize=3, color="darkgray")
plt.plot(troughs, OFFSET_TIME_SERIES + filtered_dat[troughs]*SCALE_TIMESERIES, "x", label="troughs", markersize=3, color="lightgray")

plt.bar(troughs+BAR_WIDTH, np.array(sw_analyzer.prominence)*4, bottom=BAR_OFFSET, width=WIDTH, color=hue_colors[0], label="Prominence", alpha=0.5)
plt.bar(troughs+BAR_WIDTH*2, -np.array(sw_analyzer.sharpness)*6, bottom=BAR_OFFSET, width=WIDTH, color=hue_colors[1], label="Sharpness", alpha=0.5)
plt.bar(troughs+BAR_WIDTH*3, np.array(sw_analyzer.interval)*5, bottom=BAR_OFFSET, width=WIDTH, color=hue_colors[2], label="Interval", alpha=0.5)
plt.bar(troughs+BAR_WIDTH*4, np.array(sw_analyzer.rise_time)*5, bottom=BAR_OFFSET, width=WIDTH, color=hue_colors[3], label="Rise time", alpha=0.5)

plt.xticks(np.arange(0, data_plt.shape[0], 200), np.round(np.arange(0, int(data_plt.shape[0]/1000), 0.2), 2))
plt.xlabel("Time [s]")
plt.title("Temporal waveform shape features")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.ylim(-550, 700)
plt.xlim(0, 200)
plt.ylabel("a.u.")
plt.tight_layout()
(Figure: Temporal waveform shape features)

The following example shows a time series aligned to movement. With movement onset, the prominence, sharpness, and interval features are reduced:

plt.figure(figsize=(8, 5), dpi=300)
plt.plot(OFFSET_TIME_SERIES + data_plt, color="gray", linewidth=0.5, alpha=0.5, label="original ECoG data")
plt.plot(OFFSET_TIME_SERIES + filtered_dat*SCALE_TIMESERIES, linewidth=0.5, color="black", label="[5-30]Hz filtered data")

plt.plot(peaks, OFFSET_TIME_SERIES + filtered_dat[peaks]*SCALE_TIMESERIES, "x", label="peaks",markersize=3, color="darkgray")
plt.plot(troughs, OFFSET_TIME_SERIES + filtered_dat[troughs]*SCALE_TIMESERIES, "x", label="troughs", markersize=3, color="lightgray")

plt.bar(troughs+BAR_WIDTH, np.array(sw_analyzer.prominence)*4, bottom=BAR_OFFSET, width=WIDTH, color=hue_colors[0], label="Prominence", alpha=0.5)
plt.bar(troughs+BAR_WIDTH*2, -np.array(sw_analyzer.sharpness)*6, bottom=BAR_OFFSET, width=WIDTH, color=hue_colors[1], label="Sharpness", alpha=0.5)
plt.bar(troughs+BAR_WIDTH*3, np.array(sw_analyzer.interval)*5, bottom=BAR_OFFSET, width=WIDTH, color=hue_colors[2], label="Interval", alpha=0.5)
plt.bar(troughs+BAR_WIDTH*4, np.array(sw_analyzer.rise_time)*5, bottom=BAR_OFFSET, width=WIDTH, color=hue_colors[3], label="Rise time", alpha=0.5)

plt.axvline(x=1500, label="Movement start", color="red")

#plt.xticks(np.arange(0, 2000, 200), np.round(np.arange(0, 2, 0.2), 2))
plt.xticks(np.arange(0, data_plt.shape[0], 200), np.round(np.arange(0, int(data_plt.shape[0]/1000), 0.2), 2))
plt.xlabel("Time [s]")
plt.title("Temporal waveform shape features")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.ylim(-450, 400)
plt.ylabel("a.u.")
plt.tight_layout()
(Figure: Temporal waveform shape features)

In the sharpwave_analysis_settings, the estimator keyword further specifies which statistic is computed from the individual features in one batch. The “global” setting segment_length_features_ms specifies the time duration over which features are computed. Since the number of identified waveform shape features can differ between batches (i.e. a different number of peaks/troughs is detected), taking a statistical measure (e.g. the maximum or mean) is necessary to make features comparable across batches.
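A minimal sketch of this estimator step, using hypothetical per-batch values rather than actual library output:

```python
import numpy as np

# Hypothetical per-batch prominence estimates: batches can contain a
# different number of detected troughs, hence a different number of values.
batch_prominences = [
    np.array([1.2, 0.8, 1.5]),  # batch with 3 detected troughs
    np.array([0.9, 1.1]),       # batch with 2 detected troughs
]

# Applying an estimator (here the mean) yields exactly one value per
# batch, so the feature dimensionality is the same for every batch.
features_mean = [float(np.mean(batch)) for batch in batch_prominences]
print(features_mean)
```

Any of the configurable estimators (mean, max, median, var) reduces the variable-length list to a single value per batch in the same way.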

Example time series computation for movement decoding#

We will now read the example ECoG data and investigate whether samples differ across movement states. For this, we compute features with the default sharpwave features enabled.

settings = nm_settings.get_default_settings()
settings = nm_settings.reset_settings(settings)
settings["features"]["sharpwave_analysis"] = True
settings["sharpwave_analysis_settings"]["interval"] = False
settings["sharpwave_analysis_settings"]["filter_ranges"] = [[5, 80]]

nm_channels["used"] = 0  # disable all channels first
nm_channels.loc[[3, 8], "used"] = 1  # enable only two ECoG channels for faster computation

stream = nm.Stream(
    sfreq=sfreq,
    nm_channels=nm_channels,
    settings=settings,
    line_noise=line_noise,
    coord_list=coord_list,
    coord_names=coord_names,
    verbose=True,
)

df_features = stream.run(data=data[:, :30000])
Last batch took: 0.01 seconds
1.0 seconds of data processed
Last batch took: 0.01 seconds
1.1 seconds of data processed
...
Last batch took: 0.01 seconds
19.0 seconds of data processed
_SIDECAR.json saved to /home/docs/checkouts/readthedocs.org/user_builds/py-neuromodulation/checkouts/latest/examples/sub/sub_SIDECAR.json
FEATURES.csv saved to /home/docs/checkouts/readthedocs.org/user_builds/py-neuromodulation/checkouts/latest/examples/sub/sub_FEATURES.csv
settings.json saved to /home/docs/checkouts/readthedocs.org/user_builds/py-neuromodulation/checkouts/latest/examples/sub/sub_SETTINGS.json
nm_channels.csv saved to /home/docs/checkouts/readthedocs.org/user_builds/py-neuromodulation/checkouts/latest/examples/sub/sub_nm_channels.csv

We can then plot two exemplary features, prominence and interval, and see that the movement amplitude can be clustered with those two features alone:

plt.figure(figsize=(5, 3), dpi=300)
plt.scatter(
    df_features["ECOG_RIGHT_0-avgref_Sharpwave_Max_prominence_range_5_80"],
    df_features["ECOG_RIGHT_5-avgref_Sharpwave_Mean_interval_range_5_80"],
    c=df_features["MOV_RIGHT"], alpha=0.8, s=30
)
cbar = plt.colorbar()
cbar.set_label("Movement amplitude")
plt.xlabel("Prominence a.u.")
plt.ylabel("Interval a.u.")
plt.title("Temporal features predict movement amplitude")
plt.tight_layout()
(Figure: Temporal features predict movement amplitude)

Total running time of the script: (0 minutes 3.877 seconds)