Interpolation with PyDynamic.uncertainty.interpolate.interp1d_unc

In this series of notebooks we illustrate the use of our method interp1d_unc, which is very much inspired by SciPy's *interp1d* method and therefore closely aligned with its signature and capabilities. The main features are:

  • interpolation of measurement values and associated uncertainties

  • interpolation methods *linear*, *cubic*, *next*, *nearest* and *previous* (aligned with scipy.interpolate.interp1d)

  • extrapolation of measurement values and associated uncertainties based on the values at the original data’s bounds or by custom input

  • returning sensitivity coefficients

  • performance-oriented optional parameters

      • copy

      • assume_sorted

Comprehensive details about the parameters' meanings can be found on pydynamic.readthedocs.io.
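To illustrate the core idea before we turn to real data: for linear interpolation, the associated uncertainties are propagated via the squared interpolation weights (assuming uncorrelated node uncertainties, GUM-style). The following self-contained NumPy sketch mimics that behaviour; the function `linear_interp_unc` is ours for illustration only and is not PyDynamic's implementation.

```python
import numpy as np


def linear_interp_unc(t_new, t, y, uy):
    """Linearly interpolate y(t) and propagate standard uncertainties uy.

    Illustrative stand-in only, assuming uncorrelated node uncertainties.
    """
    # Index of the left neighbour of each new time stamp.
    i = np.searchsorted(t, t_new, side="right") - 1
    i = np.clip(i, 0, len(t) - 2)
    # Interpolation weight in [0, 1] between nodes i and i + 1.
    w = (t_new - t[i]) / (t[i + 1] - t[i])
    y_new = (1 - w) * y[i] + w * y[i + 1]
    # Uncertainty propagation with squared weights (uncorrelated nodes).
    uy_new = np.sqrt((1 - w) ** 2 * uy[i] ** 2 + w ** 2 * uy[i + 1] ** 2)
    return y_new, uy_new


t = np.array([0.0, 1.0, 2.0])
y = np.array([2.0, 4.0, 3.0])
uy = np.array([0.1, 0.2, 0.1])
y_mid, uy_mid = linear_interp_unc(np.array([0.5]), t, y, uy)
```

Note that the propagated uncertainty at the midpoint is smaller than simple averaging of the node uncertainties would suggest, because the weights enter quadratically.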

Content

The examples proceed according to the following scheme.

01 Basic measurement data pre-processing.ipynb

  • Set up the appropriate execution and plotting environment.

  • Download an example data set of real sensor recordings from Zenodo.org.

  • Visualize the relevant part of the contained data.

02 Basic interpolation.ipynb

  • Conduct a simple interpolation.

03 Basic extrapolation.ipynb

  • Conduct simple interpolation and extrapolation outside the original bounds.

  • Demonstrate returning the sensitivity coefficients.

Set up the Python environment

[1]:
import holoviews as hv
import numpy as np
import h5py
import pickle
from download import download

Set up the plotting environment and labels

[2]:
# Set one of the available plotting backends ('plotly', 'bokeh', 'matplotlib').
hv.extension("bokeh")

# Define labels and units for plots.
timestamp_labels = hv.Dimension("relative measurement time", unit="s")
measurement_labels = hv.Dimension("Primary Nominal Current", unit="A")

Download sample data

This step is of course only necessary once, but on execution it checks whether the expected file is already present. Thus you can safely re-execute it without causing unnecessary network traffic or waiting times. We use a sample data set of real measured motor current data, which you can find on Zenodo:


The data actually used in this tutorial is contained in the axis 3 file.

In case you have trouble downloading the data via the following code, please download it manually from the provided URL and store it as PyDynamic_tutorials/datasets/axis3_2kHz.h5.

[3]:
# Set URL and extract filename.
url = "https://zenodo.org/record/3929385/files/axis3_2kHz.h5"
filename = "".join(("../datasets/", url.split("/")[-1]))

path = download(url, filename, replace=False, verbose=True)
Downloading data from https://zenodo.org/record/3929385/files/axis3_2kHz.h5 (1.03 GB)

file_sizes: 100%|██████████████████████████| 1.11G/1.11G [07:56<00:00, 2.32MB/s]
Successfully downloaded file to ../datasets/axis3_2kHz.h5

Unpack and prepare data to visualize sample set

We extract only a small excerpt of the data, which works well for demonstration purposes. According to the data set's documentation on Zenodo, the measurement values have to be converted to obtain SI units as stated in the included PDF file.

[4]:
# Read the h5-file.
hf = h5py.File(path, "r")

# Set well-suited range of sensor measurements.
start = 2
n_nodes = 4
end = start + n_nodes

# Extract well-suited sensor measurements.
data_points_measured = hf["Sensor_Data"][8, start:end, 0]

# Free memory.
hf.close()

# Convert extraction into SI units.
data_points_measured_si = (
    ((data_points_measured * 8.76e-5) + 1.36e-2) * 5.299641744 * 2.0
)
data_points_measured_si
[4]:
array([2.47691988, 4.1246116 , 3.95322654, 1.41385091])
[5]:
# Determine measurement uncertainties from data sheet stating 1.5% of measured value.
data_points_measured_si_unc = data_points_measured_si * 0.015
data_points_measured_si_unc
[5]:
array([0.0371538 , 0.06186917, 0.0592984 , 0.02120776])
[6]:
# Construct relative time stamps in seconds from the beginning of the
# measurement, based on the known spacing of the extracted samples.
t = np.linspace(start=start * 2e-3, stop=(end - 1) * 2e-3, num=n_nodes)
t
[6]:
array([0.004, 0.006, 0.008, 0.01 ])
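As a quick sanity check on the cell above, the constructed time stamps should be uniformly spaced by 2 ms; a minimal NumPy sketch reproducing the construction:

```python
import numpy as np

# Reconstruct the time stamps exactly as in the cell above.
start, n_nodes = 2, 4
end = start + n_nodes
t = np.linspace(start=start * 2e-3, stop=(end - 1) * 2e-3, num=n_nodes)

# The consecutive differences should all equal the 2 ms spacing.
assert np.allclose(np.diff(t), 2e-3)
```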

Visualize the original data points

[7]:
original_curve = hv.Curve(
    (t, data_points_measured_si),
    timestamp_labels,
    measurement_labels,
    label="measurements",
)

original_uncertainties_plot = hv.Spread(
    (t, data_points_measured_si, data_points_measured_si_unc),
    vdims=[measurement_labels, "Associated Uncertainty"],
    kdims=timestamp_labels,
    label="uncertainties",
)

original_plot = (original_uncertainties_plot * original_curve).opts(
    title="Measured values and associated uncertainties",
)

original_plot.opts(width=600, height=800, legend_position="top_right")
[7]:

Store results to be used in later lessons

[8]:
np.save("data_points", data_points_measured_si)
np.save("data_points_unc", data_points_measured_si_unc)
np.save("time_stamps", t)
with open("original_plot.p", "wb") as f:
    pickle.dump(original_plot, f)