Upsampling/oversampling - Smooth or stairstep curve? How are in-between values determined?


#1

Do you calculate interim values when you upsample? If not, you will have a stairstep, no?

If you calculate the in-between values, how do you calculate them when the curve “curves”? How do you “connect the dots”?

What don’t I understand here?

Peace
Bruce in Philly


#2

Some DACs use various forms of interpolation (none, linear, higher-order polynomials, cubic splines, etc.).

But the canonical method is to add zeros between the current samples (for example, if you are upsampling by 3 you’d add two zeros between each original sample). Then you use a filter with a cutoff at 1/2 of the original sample rate. That is required to avoid introducing aliasing, but looking at it another way, if the signal is bandlimited (which is required for accurate sampling in the first place) then it can’t wiggle fast enough to get down to the zeros and back up to the non-zero values. Yet a third way to think of it: there is only one bandlimited signal that goes through the original points (once again, that’s what accurate sampling is all about in the first place). Once you have that unique curve you can resample it at any rate higher than the original, still have all of the original information available, and still be able to generate that unique curve.
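Here’s a rough sketch of that recipe in Python/NumPy (the 3x factor, the 1 kHz test tone, and the 101-tap windowed-sinc filter are arbitrary illustrative choices, not anything specific to a particular DAC):

```python
import numpy as np
from scipy.signal import firwin, lfilter

def upsample_zero_stuff(x, L):
    """Upsample by integer factor L: insert L-1 zeros between samples,
    then lowpass filter at the *original* Nyquist frequency."""
    # 1) Zero-stuffing: the new signal's spectrum contains images of the
    #    original spectrum above the original Nyquist frequency.
    stuffed = np.zeros(len(x) * L)
    stuffed[::L] = x

    # 2) Lowpass filter with cutoff at 1/2 the original sample rate,
    #    which is 1/L of the new Nyquist frequency. The factor of L
    #    restores the amplitude lost by inserting zeros.
    taps = L * firwin(numtaps=101, cutoff=1.0 / L)
    return lfilter(taps, 1.0, stuffed)  # ignoring the filter's delay/transient

# Example: a 1 kHz tone sampled at 8 kHz, upsampled 3x to 24 kHz
fs, L = 8000, 3
t = np.arange(0, 0.01, 1 / fs)
x = np.sin(2 * np.pi * 1000 * t)
y = upsample_zero_stuff(x, L)   # 3x as many samples, same waveform shape
```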

One point I should make clear: the above is how things work in theory, but in reality there’s no perfect filter nor a perfectly bandlimited signal (a truly bandlimited signal can’t also be timelimited…), and hence the art of implementing upsampling is in picking the interpolation function or the filter used in upsampling. Different assumptions about which approximations are acceptable (i.e. which features of the original signal are most important) will lead to differing filter (or interpolator) implementations.

There are a lot of reasonable YouTube videos out there about the sampling theorem that may make things clearer. Also, many universities have lectures from various signal processing courses available (MIT OpenCourseWare comes to mind.) Note that the sampling theorem is like quantum mechanics or Gödel’s incompleteness theorem: there’s a lot of confusion out there, people argue all of the time (even with no background in the subject at hand), and it takes a while to get your head around it.


#3

Interesting…

I guess I am missing something… if you doubled the rate and created an interpolated midpoint sample, wouldn’t that alleviate the need for a filter or maybe make the system less reliant on one? Doesn’t this approach “take the edge off” per se?

Peace
Bruce in Philly


#4

Averaging two samples (midpoint sample) is a filter. A not horrible filter, but not great either.
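
One way to see that, as a small NumPy sketch with made-up sample values: doubling the rate by inserting midpoint averages gives exactly the same result as zero-stuffing and then convolving with the triangular kernel [1/2, 1, 1/2], i.e. linear interpolation is itself a lowpass filter.

```python
import numpy as np

x = np.array([0.0, 1.0, 0.5, -0.8, 0.3])   # a few arbitrary samples

# 2x upsample by inserting the average of each neighboring pair
mid = (x[:-1] + x[1:]) / 2
averaged = np.empty(2 * len(x) - 1)
averaged[0::2] = x
averaged[1::2] = mid

# The same thing done as a filter: zero-stuff, then convolve with [1/2, 1, 1/2]
stuffed = np.zeros(2 * len(x) - 1)
stuffed[0::2] = x
filtered = np.convolve(stuffed, [0.5, 1.0, 0.5], mode="same")

print(np.allclose(averaged, filtered))   # True: midpoint interpolation *is* a filter
```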

Here’s what a signal at about 4/5 of the max frequency would look like using linear interpolation vs an ideal sin(x)/x interpolation:
(The red line is the linear interpolation, the triangles are sin(x)/x)


[From “Sin(x)/x Interpolation: An Important Aspect of Proper Oscilloscope Measurements” by Chris Rehorn, Agilent Technologies, Figure 7, p. 8]

With linear interpolation, all samples at any new sampling rate would be on the red line. The triangles are an example of where they’d land at a much higher sample rate using sin(x)/x as the reconstruction filter. If you used a different sample rate they would still fall on the original analog waveform. (There are details in the real world, but you get the idea.)


#5

Here’s more detail:

The original bandlimited input waveform is in black.

The samples are the vertical black lines with * at their ends (at all integers for this example.)

To do a reconstruction you take each sample, scale a whole sin(x * pi)/(x * pi) curve by the sample’s value, and add it to the output result at the sample’s time. (I’ll write sinc(x) instead of sin(x * pi) / (x * pi) below; it’s a slight abuse of terminology, but it doesn’t change the results.)

The magenta, green and blue curves are the scaled sinc(x) at the samples at -1, 0 and 1.

I sum all of the sinc(x) curves scaled by all samples and plot that as red (which overlays the input black curve as one might expect.)

sinc(x) evaluated at the integers is 0 everywhere except at x = 0, where it’s 1. So when we scale it by a particular sample (say, the sample at 1, giving the blue curve) the only sample point that is affected is the sample at 1. For example, you can see in the zoomed-in plot that all of the sinc(x) curves go through 0 at the sample at 2.

The sin(x * pi)/(x * pi) function isn’t arbitrary; it’s what you need to use to filter the input at exactly 1/2 of the sample rate. If you want a different filter it will change the function.

For reference, the cyan is what linear interpolation would give; note that it misses badly on the right.
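
Here’s a small NumPy sketch of that sum-of-scaled-sincs reconstruction (the sample values below are made-up stand-ins for the waveform in the plot):

```python
import numpy as np

# Samples taken at the integers (as in the plot); the values here are made up
sample_times = np.arange(-5, 6)                      # -5, -4, ..., 5
samples = np.sin(2 * np.pi * 0.23 * sample_times)    # stand-in bandlimited signal

# Dense time axis on which the reconstructed (red) curve would be drawn
t = np.linspace(-5, 5, 2001)

# np.sinc(x) is sin(pi*x)/(pi*x), i.e. the sinc() in the description above.
# Reconstruction: one sinc per sample, centered at the sample's time,
# scaled by the sample's value, all summed.
reconstructed = sum(v * np.sinc(t - n) for n, v in zip(sample_times, samples))

# sinc(t - n) is 1 at t = n and 0 at every other integer, so the sum
# passes exactly through each original sample:
at_samples = np.array([np.sum(samples * np.sinc(m - sample_times))
                       for m in sample_times])
print(np.allclose(at_samples, samples))   # True
```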


#6

Damn cool, Ted!

Thanks


#7

That’s excellent, Ted.


#8

SO… I guess where I had my hangup is in the difference between what an analog filter and a digital filter do… or more specifically, how a digital filter works. You threw me when you noted “averaging two samples (midpoint sample) is a filter”. I did not look at that as a filter, but as an improvement in accuracy… a more accurate representation of the waveform in the digital domain. It always seemed odd to me that a resistor-ladder DAC chip could produce a smooth waveform, hence my desire for more data points or more resistors with finer values.

So interpolation of intermediate points is a filter… I never thought of it this way.

Peace
Bruce in Philly