Signal processing
The sigpro subpackage provides low-level signal processing tools operating on sequences given as 1D numpy arrays. The streams subpackage builds on those tools to provide the same functionality for chunked signal processing.
Functions of time
Since functions of time are used extensively, it is good practice not to use plain numpy arrays and callables directly but to define dedicated classes. The sigpro package provides suitable classes for this purpose.
To pass around a 1D numpy array representing samples or sample times, we define a class SeqChunk which simply wraps a numpy array and ensures it is one-dimensional. To pass around functions of time, we define a class FuncOfTime. This class simply wraps any function that operates on numpy arrays. It provides an interface accepting and returning SeqChunk instances, and it checks that the wrapped function preserves the shape. Further, FuncOfTime instances have a dtype property and ensure that the wrapped function returns the correct data type. Compared to using plain numpy arrays and callables, these classes allow stricter type annotations and improve readability by expressing intent explicitly.
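A minimal sketch of what such wrappers could look like. The names SeqChunk and FuncOfTime come from the text above; the method and property details are assumptions, not the actual sigpro interface:

```python
import numpy as np


class SeqChunk:
    """Wrap a 1D numpy array, rejecting any other shape (illustrative sketch)."""

    def __init__(self, data: np.ndarray) -> None:
        data = np.asarray(data)
        if data.ndim != 1:
            raise ValueError("SeqChunk requires a one-dimensional array")
        self._data = data

    @property
    def data(self) -> np.ndarray:
        return self._data

    @property
    def size(self) -> int:
        return self._data.size


class FuncOfTime:
    """Wrap a callable operating on arrays, checking shape and dtype of the
    result (illustrative sketch)."""

    def __init__(self, func, dtype=np.float64) -> None:
        self._func = func
        self._dtype = np.dtype(dtype)

    @property
    def dtype(self) -> np.dtype:
        return self._dtype

    def __call__(self, times: SeqChunk) -> SeqChunk:
        result = np.asarray(self._func(times.data))
        if result.shape != times.data.shape:
            raise ValueError("wrapped function changed the shape")
        if result.dtype != self._dtype:
            raise TypeError("wrapped function returned the wrong dtype")
        return SeqChunk(result)
```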
An important optimization is to allow passing constant sequences and constant functions of time as scalars. To support this, we define classes ConstSeqChunk and ConstFuncOfTime which represent the constant counterparts of SeqChunk and FuncOfTime. Since ConstSeqChunk represents a SeqChunk with constant values, it has a size property as well. Calling a ConstFuncOfTime results in a ConstSeqChunk with the same size as the input.
The general aim in the simulator is to propagate constant-ness. Therefore, the streams package has similar counterparts. For example, there is a constant stream type and a stream type that samples a FuncOfTime. Further, there is a function that accepts FuncOfTime and ConstFuncOfTime, and returns either a sampling stream or a constant stream.
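Propagating constant-ness then amounts to dispatching on the argument type. The sketch below uses hypothetical stand-in classes (ConstStream, SampledStream, stream_from_func are not the actual streams API) to illustrate the idea:

```python
import numpy as np


class ConstStream:
    """Stand-in for a stream with a single constant value."""

    def __init__(self, value: float) -> None:
        self.value = value


class SampledStream:
    """Stand-in for a stream that samples a function of time chunk by chunk."""

    def __init__(self, func) -> None:
        self.func = func


class ConstFuncOfTime:
    """Stand-in for a constant function of time, represented by a scalar."""

    def __init__(self, value: float) -> None:
        self.value = value


def stream_from_func(f):
    """Return a constant stream for constant input, a sampling stream otherwise,
    thereby propagating constant-ness into the stream layer."""
    if isinstance(f, ConstFuncOfTime):
        return ConstStream(f.value)
    return SampledStream(f)
```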
FIR filtering
Finite impulse response (FIR) filtering is implemented in the fir_filters_numpy module. It consists of a few components, each serving one specific purpose. For example, we separate the filter definition from filter application. Further, filtering and application of boundary conditions are separate. There are a couple of convenience functions combining those blocks for direct use. Also, there are functions to create specific parametrized FIR filters needed in the simulator (Kaiser windows). However, the main purpose of this module is to provide low-level filtering functionality to the streams package, which provides FIR filtering for chunked processing.
The fir_filters_numpy module provides the following low-level components for FIR filtering.
DefFilterFIR is a data class that defines the filter. It contains the coefficients and the offset of the corresponding convolution kernel.
EdgeHandling is an enum class defining choices for boundary conditions.
SemiLocalMapType is a protocol for operators on 1D arrays, such as filtering, that depend on a fixed number of margin points to the left and right.
FIRCoreOp implements the SemiLocalMapType protocol. It applies a given FIR filter to 1D numpy arrays. Boundary conditions are not part of this, only valid points are computed.
semilocal_map_numpy is a function applying a given SemiLocalMapType to a numpy array; it can also apply boundary conditions such as zero padding.
filter_fir_numpy combines the above to directly apply a DefFilterFIR to a numpy array, with boundary conditions.
make_filter_fir_numpy is similar to filter_fir_numpy, but binds the definition and boundary condition arguments to create a one-parameter operator mapping 1D numpy arrays to filtered arrays.
make_fir_causal_kaiser creates the DefFilterFIR describing a Kaiser window.
Applying FIR filters is implemented using the convolve function from scipy.signal.
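The "valid points only" behavior of the core filtering can be illustrated directly with scipy's convolve; the kernel values below are arbitrary examples, not coefficients from the simulator:

```python
import numpy as np
from scipy.signal import convolve

# FIR filtering of only the "valid" points: no boundary conditions are applied,
# so the output is shorter than the input by len(coeffs) - 1 samples, i.e. the
# margins on both sides are dropped.
coeffs = np.array([0.25, 0.5, 0.25])  # simple smoothing kernel (example values)
data = np.arange(10.0)

# scipy.signal.convolve performs true convolution, i.e. it applies the
# reversed kernel; for a symmetric kernel like this one the distinction
# does not matter.
valid = convolve(data, coeffs, mode="valid")
assert valid.size == data.size - (coeffs.size - 1)
```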
Note
In theory, the overlap-add method might be faster for our use cases, but superficial tests of the implementation available in scipy were not promising. Another potential optimization would be to precompute the FFT of the filter coefficients to avoid duplicating this work when applying the same filter to multiple data sets. The latter is relevant for our use cases.
IIR filtering
Infinite impulse response (IIR) filtering is implemented in the iir_filters_numpy module. As for the FIR filters, there are convenience functions for direct use, but the main purpose is to provide the low-level algorithms for use in the streams package. We also separate the filter definition from the filter application. The main components are the following.
DefFilterIIR is a data class that defines the filter. It just contains the IIR filter coefficients.
IIRFilterIC is an enum class defining choices for the initial conditions of the filter.
SequentialNumpyMapType is a protocol describing something that can process segments of arrays in a sequential manner while keeping an internal state, and that provides a way to initialize the state. Another operation besides IIR filters that can be described in such terms is cumulative summation.
IIRCoreOp implements the SequentialNumpyMapType protocol for IIR filters.
IIRChainCoreOp is like IIRCoreOp, but applying a whole chain of IIR filters at once.
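The sequential-state idea behind SequentialNumpyMapType can be illustrated with chunked cumulative summation. The class below is an illustrative sketch, not the actual protocol or its method names:

```python
import numpy as np


class CumSumOp:
    """Sketch of a sequential operator in the spirit of SequentialNumpyMapType:
    cumulative summation that processes chunks one after another while carrying
    a running total as internal state."""

    def __init__(self) -> None:
        self._carry = 0.0

    def reset(self) -> None:
        """Initialize the internal state."""
        self._carry = 0.0

    def apply(self, chunk: np.ndarray) -> np.ndarray:
        """Process one chunk, updating the carried state for the next chunk."""
        out = np.cumsum(chunk) + self._carry
        if out.size:
            self._carry = out[-1]
        return out
```

Processing an array chunk by chunk this way yields exactly the same result as one call to np.cumsum on the whole array, which is what allows the streams package to use such operators for chunked processing.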
The following functions allow direct use of IIR filters:
filter_iir_numpy applies an IIR filter definition and initial condition choice to a 1D numpy array.
make_filter_iir_numpy creates an IIR filter operator acting on 1D numpy arrays from filter definition and initial condition choice.
For a few specific IIR filter types, the following functions provide definitions in terms of DefFilterIIR:
make_iir_def_derivative: first order finite differences
make_iir_def_cumsum: cumulative summation
make_iir_def_trapezoidal: integration using trapezoidal rule
For use in optimizations involving constant sequences, the functions get_iir_steady_state and get_iir_chain_steady_state return the nonzero steady-state solution of IIR filters that have one (a counterexample being cumulative summation, which has none).
IIR filtering is implemented using scipy.signal.lfilter.
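The role of initial conditions and the steady-state notion can be illustrated with lfilter on a constant input; the filter coefficients below are illustrative, not taken from the simulator:

```python
import numpy as np
from scipy.signal import lfilter, lfilter_zi

# A first-order low-pass IIR filter (example coefficients).
b = np.array([0.1])
a = np.array([1.0, -0.9])

x = np.full(50, 2.0)  # constant input

# Default (zero) initial conditions: the output needs time to settle.
y_zero = lfilter(b, a, x)

# Initial conditions matched to the input level: lfilter_zi returns the
# internal state for a unit step at steady state; scaling it by the input
# value puts the output at its steady state from the very first sample.
y_ss, _ = lfilter(b, a, x, zi=lfilter_zi(b, a) * x[0])

# The steady-state response to a constant input c is c * sum(b) / sum(a),
# which is the kind of value a steady-state helper would return.
steady = x[0] * b.sum() / a.sum()
```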
Interpolation
Interpolation on regularly spaced 1D grids is implemented in the regular_interpolators module. The architecture is driven by the requirements of the simulator.
Different interpolation implementations should be easily exchangeable, meaning all interpolators should provide a common interface.
All interpolators should require a finite number of margin points around a given interpolation point and allow querying the number of required points on each side.
An interface for bare interpolation without boundary handling is needed for use in chunked processing.
Using the interpolation to apply time shifting operations with shifts much smaller than the absolute time values should not degrade accuracy.
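The accuracy requirement can be demonstrated with plain float64 arithmetic; the numbers are illustrative. At large absolute times, a tiny shift added to the absolute time value is lost entirely, which is why shifts must be handled relative to the sample grid rather than folded into absolute times:

```python
import numpy as np

t = 1.0e9       # absolute time in seconds, roughly decades of mission time
shift = 1.0e-9  # a nanosecond shift

# Adding the shift to the absolute time loses it completely...
assert t + shift == t

# ...because the float64 spacing at t is about two orders of magnitude
# larger than the shift, while the shift on its own is perfectly
# representable.
assert np.spacing(t) > 100 * shift
```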
The main components are the following.
RegularInterpCore is the protocol for low-level 1D interpolation, without any boundary conditions. It provides methods for interpolation to generic locations as well as for the special case of applying shifts.
RegularInterpLagrange provides an implementation of Lagrange interpolation.
RegularInterpLinear provides an implementation of linear interpolation.
RegularInterpolator wraps a RegularInterpCore instance and performs parameter checks common to all interpolators, such as data types and dimensionality.
For the Lagrange interpolation, we use an implementation written from scratch. The choice of algorithm is based on the main use case, time shifting. For this use case, there are roughly as many interpolated points as there are data samples, and the interpolated points are more or less evenly distributed. This lies between two extreme scenarios. The first is sparsely distributed interpolation points. In this case, most of the Lagrange polynomials one can construct around the sample points are not needed. An algorithm optimized for this case should thus construct only the polynomials needed for the few evaluation locations; if none of them are close together, this means setting up one polynomial per interpolation location. The other extreme is given by interpolation locations covering the range more densely than the sample points. In this case, many interpolation locations require the same Lagrange polynomial. It then makes sense to set up each possible polynomial exactly once and use it for all points that require it.
Our implementation does the latter, i.e. it sets up all Lagrange polynomials once. It then evaluates, for each interpolation location, the polynomial whose center is closest to that location. Furthermore, we broke this task down into the application of a number of FIR filters, as described in lisainstrument.sigpro.regular_interpolators.RegularInterpLagrange. Since we use the FIR filtering from the fir_filters_numpy module, any optimization of the FIR filters will automatically improve the Lagrange interpolation speed as well.
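The FIR view of Lagrange interpolation can be sketched as follows: the Lagrange basis polynomials evaluated at one fixed fractional offset yield a set of FIR coefficients, which are then applied by convolution. The function name lagrange_fir and all details are illustrative; the actual construction lives in RegularInterpLagrange:

```python
import numpy as np


def lagrange_fir(frac: float, stencil) -> np.ndarray:
    """FIR coefficients for interpolating at fractional offset `frac`,
    obtained by evaluating the Lagrange basis polynomials on the given
    integer stencil (illustrative sketch)."""
    stencil = np.asarray(stencil, dtype=float)
    coeffs = np.empty_like(stencil)
    for j, sj in enumerate(stencil):
        others = stencil[stencil != sj]
        coeffs[j] = np.prod((frac - others) / (sj - others))
    return coeffs


# Cubic Lagrange stencil around the interval [0, 1], fractional offset 0.25.
w = lagrange_fir(0.25, [-1, 0, 1, 2])

# Quadratic data is reproduced exactly by cubic Lagrange interpolation.
x = np.arange(10.0) ** 2

# Apply as an FIR filter, keeping only valid points; output sample i
# interpolates x at location (i + 1) + 0.25.
y = np.convolve(x, w[::-1], mode="valid")
```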
Time shifting
Time shifting is a special case of interpolation and it is the main use case in the simulator. The streams package implements dynamic time shifts using the methods provided by the regular_interpolators module.
To optimize for the important case of constant time shifts, the fixed_shift_numpy module provides suitable methods. As for generic interpolation, the interface for fixed shifts is split into one part for the core time-shifting task, not taking into account boundary conditions, and another including boundary conditions. The former is used in the streams package, which offers a stream type for fixed shifts. The latter can be used to work with ordinary numpy arrays, but it is not needed in the simulator. The core components are the following.
FixedShiftCore is the protocol for fixed-shift interpolation without boundary conditions.
FixedShiftLagrange implements fixed-shift Lagrange interpolation.
The implementation of fixed Lagrange interpolation is based on FIR filters, created by the make_fir_lagrange_fixed function. The FIR filtering is delegated to the tools in module fir_filters_numpy.
Somewhat confusingly, the FixedShiftCore protocol does not have a parameter for the shift. The shift is left to the implementation; for FixedShiftLagrange it is specified during instantiation. The motivation is instructive: for use in the streams package, the domain of dependence of the interpolated sequence on the input sequence needs to be known in advance. Therefore, the FixedShiftCore protocol includes properties for the margins required in the input samples around the interpolation target range.
For direct use with boundary conditions, there are convenience wrappers.
FixedShiftNumpy wraps FixedShiftCore interpolators and applies boundary conditions.
lisainstrument.dynamic_delay_numpy.ShiftBC is an enum class for boundary condition choices.
make_fixed_shift_lagrange_numpy creates a FixedShiftNumpy with a Lagrange interpolator.
The FixedShiftNumpy shift method does have a parameter for the shift value, and it creates a FixedShiftCore instance for each call.
Further, there is another wrapper that accepts numpy arrays or scalar values for the shift and uses dynamic or fixed interpolators accordingly. It is called AdaptiveShiftNumpy and can be found in the adaptive_delay_numpy module. This interface is not used in the simulator but may be useful for other applications.
Time shift inversion
The shift_inversion_numpy module provides methods to invert a coordinate time transformation given as a time shift. This is a rather specialized module created to solve such a task in the simulator. The streams package has a corresponding stream type to provide the shift inversion for chunked processing.
The shift inversion is formulated as a fixed-point problem whose iteration involves the application of a time shift, and the solution is computed by means of a fixed-point iteration. For the interpolation, one can use any interpolator provided as a RegularInterpolator instance, in particular Lagrange interpolation. There is no abstract interface/protocol for the shift inversion; instead one directly uses the ShiftInverseNumpy class. To create an instance employing Lagrange interpolation, one can use the make_shift_inverse_lagrange_numpy convenience function.
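The fixed-point iteration can be sketched as follows. For simplicity, the shift is given here as a plain callable and the numbers are chosen for illustration; the actual implementation interpolates sampled shifts using a RegularInterpolator:

```python
import numpy as np

# Given a time shift d(t) defining the transformation tau = t - d(t), the
# inverse shift dinv satisfies t = tau + dinv(tau), which leads to the
# fixed-point equation dinv(tau) = d(tau + dinv(tau)).


def invert_shift(d, tau: np.ndarray, n_iter: int = 20) -> np.ndarray:
    """Solve dinv = d(tau + dinv) by fixed-point iteration, starting from
    the zeroth-order guess dinv = d(tau). Converges if d varies slowly
    enough (contraction condition |d'| < 1)."""
    dinv = d(tau)
    for _ in range(n_iter):
        dinv = d(tau + dinv)
    return dinv


def d(t):
    """A small, slowly varying shift (illustrative, not from the simulator)."""
    return 0.1 * np.sin(0.01 * t)


tau = np.linspace(0.0, 100.0, 201)
dinv = invert_shift(d, tau)

# Round trip: applying the forward transform to t = tau + dinv(tau)
# recovers tau.
t = tau + dinv
assert np.allclose(t - d(t), tau, atol=1e-12)
```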