Discussion:
[LAD] Is -ffast-math safe for audio?
Will Godfrey
2018-11-22 17:30:42 UTC
While testing some mixed floating-point and integer calculations I found quite a
surprising difference when this compiler option was set (gcc 6.x). It was
clearly different at only 100 iterations and got dramatically worse with
larger counts.

My test routine was this:

int a = 0;
float b = 0;
float c = 0;
float inc = 0.1f;
float dec = 0.05f;
int it = 100;
for (int i = 0; i < it; ++i)
{
    a = (int)truncf(b);
    c = b - floorf(b);
    b += inc;
    a = (int)truncf(b);
    c = b - floorf(b);
    b -= dec;
}

cout << "int " << a << " rem " << c << endl;

My suspicion is that the difference is due to accumulated rounding errors.

Curiously without the decrements the behavior with and without -ffast-math
seems to be identical well into the millions.
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
Hermann Meyer
2018-11-22 17:57:15 UTC
Post by Will Godfrey
While testing some mixed floating-point and integer calculations I found quite a
surprising difference when this compiler option was set (gcc 6.x). It was
clearly different at only 100 iterations and got dramatically worse with
larger counts.
...
My suspicion is that the difference is due to accumulated rounding errors.
Curiously, without the decrements the behavior with and without -ffast-math
seems to be identical well into the millions.
In the guitarix project we disabled -ffast-math several years ago, if I
remember right back at gcc 3, as it could lead to different,
non-reproducible calculations. The last option I've disabled, on gcc 8
now, is -ffinite-math-only; it leads to NaNs and Infs in several cases,
which were likewise not reproducible.
David Runge
2018-11-22 18:28:58 UTC
Post by Hermann Meyer
In the guitarix project we disabled -ffast-math several years ago, if I
remember right back at gcc 3, as it could lead to different,
non-reproducible calculations. The last option I've disabled, on gcc 8
now, is -ffinite-math-only; it leads to NaNs and Infs in several cases,
which were likewise not reproducible.
Rabbit hole stuff! SuperCollider came to a similar conclusion:
https://github.com/supercollider/supercollider/issues/4116
--
https://sleepmap.de
Will J Godfrey
2018-11-22 19:58:57 UTC
On Thu, 22 Nov 2018 19:28:58 +0100
Post by David Runge
In the guitarix project we disabled -ffast-math several years ago, if I
remember right back at gcc 3, as it could lead to different,
non-reproducible calculations. The last option I've disabled, on gcc 8
now, is -ffinite-math-only; it leads to NaNs and Infs in several cases,
which were likewise not reproducible.
https://github.com/supercollider/supercollider/issues/4116
Thanks a lot people. I kept thinking I was doing something wrong!
--
It wasn't me! (Well actually, it probably was)

... the hard part is not dodging what life throws at you,
but trying to catch the good bits.
Robin Gareus
2018-11-22 20:12:48 UTC
Post by David Runge
https://github.com/supercollider/supercollider/issues/4116
This is a different issue. In SuperCollider's case they do want NaNs and
not finite-math; that is not usually the case for audio.

If you perform a calculation in your DSP code that can produce NaN or
infinite values, or that needs to distinguish +/-0, there is a much
bigger issue.

ciao,
robin
Fons Adriaensen
2018-11-22 20:07:36 UTC
Post by Will Godfrey
While testing some mixed floating-point and integer calculations I found quite a
surprising difference when this compiler option was set (gcc 6.x). It was
clearly different at only 100 iterations and got dramatically worse with
larger counts.
...
In the guitarix project we disabled -ffast-math several years ago, if I
remember right back at gcc 3, as it could lead to different,
non-reproducible calculations. The last option I've disabled, on gcc 8
now, is -ffinite-math-only; it leads to NaNs and Infs in several cases,
which were likewise not reproducible.
-ffast-math should be perfectly OK for any normal audio DSP code.
The only use case where I'd consider not using it is for precision
measurement applications.

If using it leads to unexpected results that means that the algorithm
used is itself numerically unstable. A classic example of this is the
'biquad' and similar structures when used for filters that operate
mainly on low frequencies. Unfortunately, that sort of code is
everywhere, cut and pasted from one bad example to the next.

Strange that this turns up right now. Earlier today I was implementing
an IIR filter originally designed in MATLAB, which uses doubles for
everything. When the coefficients were truncated to 32-bit floats,
the filter became unstable. This doesn't mean that such a filter
needs double precision. It just indicates bad design.

Ciao,
--
FA
Hermann Meyer
2018-11-22 20:27:58 UTC
Post by Fons Adriaensen
Post by Will Godfrey
While testing some mixed floating-point and integer calculations I found quite a
surprising difference when this compiler option was set (gcc 6.x). It was
clearly different at only 100 iterations and got dramatically worse with
larger counts.
...
In the guitarix project we disabled -ffast-math several years ago, if I
remember right back at gcc 3, as it could lead to different,
non-reproducible calculations. The last option I've disabled, on gcc 8
now, is -ffinite-math-only; it leads to NaNs and Infs in several cases,
which were likewise not reproducible.
-ffast-math should be perfectly OK for any normal audio DSP code.
The only use case where I'd consider not using it is for precision
measurement applications.
If using it leads to unexpected results that means that the algorithm
used is itself numerically unstable. A classic example of this is the
'biquad' and similar structures when used for filters that operate
mainly on low frequencies. Unfortunately, that sort of code is
everywhere, cut and pasted from one bad example to the next.
Strange that this turns up right now. Earlier today I was implementing
an IIR filter originally designed in MatLab which uses doubles for
everything. When the coefficients were truncated to 32-bit floats,
the filter became unstable. This doesn't mean that such a filter
needs double precision. It just indicates bad design.
Ciao,
In guitarix nearly all DSP code is generated by FAUST.

However, problems with NaNs and Infs when using -ffinite-math-only occur
only when we save preset values to file, and only sometimes then.
Without -ffinite-math-only, never ever.
Fons Adriaensen
2018-11-22 21:41:45 UTC
Post by Hermann Meyer
In guitarix nearly all DSP is generated by FAUST.
That doesn't make any difference for numerical stability.
This is a property of an algorithm, not of the language.

Ciao,
--
FA
Hermann Meyer
2018-11-24 07:40:11 UTC
Post by Fons Adriaensen
Post by Hermann Meyer
In guitarix nearly all DSP is generated by FAUST.
That doesn't make any difference for numerical stability.
This is a property of an algorithm, not of the language.
Ciao,
Just as a side note, we have no problems in guitarix when we use float
instead of double. I've done so for some ARM ports.
Robin Gareus
2018-11-22 22:44:31 UTC
Post by Hermann Meyer
However, problems with NaNs and Infs when using -ffinite-math-only occur
only when we save preset values to file, and only sometimes then.
A shot in the dark...

Serializing a float in most parts of the world uses a comma as decimal
separator, e.g. "0,5".

Reading this in the C locale (dot as decimal separator) with C/C++
standard library functions returns 0. This may lead to a division by
zero later.

Cheers!
robin
Hermann Meyer
2018-11-23 02:56:55 UTC
Post by Robin Gareus
Post by Hermann Meyer
However, problems with NaNs and Infs when using -ffinite-math-only occur
only when we save preset values to file, and only sometimes then.
A shot in the dark...
Serializing a float in most parts of the world uses a comma as decimal
separator, e.g. "0,5".
Reading this in the C locale (dot as decimal separator) with C/C++
standard library functions returns 0. This may lead to a division by
zero later.
Cheers!
robin
That happens during write, not during read. During read, NaNs and Infs
are replaced by default values when detected, but it means the original
settings are lost.

The values are saved as 0.5, with a dot; the comma is used as the
separator between the entries.

(And no error with entry separation has ever happened.)

It can also happen to values which weren't used or changed during a
session.

And it happens in a non-reproducible way. Very strange, and indeed only
when -ffinite-math-only is turned on.

I've tried for weeks to find out what really happens, but could only
nail it down to -ffinite-math-only.

-ffast-math, on the other hand, changes the resulting sound; it's
minimal, but noticeable.

The only flag from the -ffast-math set that we still use is -fno-math-errno.

The only flag from the -ffast-math macro we still use is -fno-math-errno.


regards

hermann
Hermann Meyer
2018-11-23 04:09:07 UTC
Post by Hermann Meyer
[...]
As a side note, in my side project GxPlugins.lv2 I still use -ffast-math
without problems.
Robin Gareus
2018-11-22 22:29:11 UTC
Hi Will,

I just ran your code and -ffast-math does not make any difference.

With or without -ffast-math I get "int: 5 rem: 0.049994".

However, optimizing the code with `-O2 -ffast-math` does make a
difference, because SSE is used.

Do you also use -O2 or -O3 along with -ffast-math?

On 11/22/2018 06:30 PM, Will Godfrey wrote:
[...]
Post by Will Godfrey
My suspicion is that the difference is due to accumulated rounding errors.
Curiously without the decrements the behavior with and without
-ffast-math seems to be identical well into the millions.
Yep, you do lose precision. In IEEE 32-bit float, there is no exact 0.1.

0.1 is approximated as 13421773 * 2^(-27), and 0.05 as
13421773 * 2^(-28), truncated.

Note that 13421773 is an odd number. The last bit of the mantissa
(0x4ccccd) is lost when the exponent changes.

A simpler example to show this is

#include <stdio.h>
#include <math.h>
int main (int argc, char** argv) {
    float a = 0;
    for (int i = 0; i < 100; ++i) {
        a += 0.1f;
        a -= 0.05f;
        a = fmodf (a, 1.f);
    }
    printf ("%f\n", a);
    return 0;
}

using gcc 6.3.0-18+deb9u1, x86_64, this
prints 1.000000 (when compiled with -O0)
and 0.000001 (when compiled with -O2 -ffast-math)


The difference here is that the former calls fmodf(), while in the
latter, optimized, case the compiler uses the SSE cvtss2sd conversion
instead. The internal limits of float precision differ between those cases.


-ffast-math issues in audio DSP are usually due to re-ordering and
reciprocal approximations: e.g. instead of (x / 2) the compiler computes
(0.5 * x), which can lead to different results if the inverse value does
not have a precise floating-point representation, e.g. 1.0 / 0.1 != 10.0.

A good read on the subject is
http://www.itu.dk/~sestoft/bachelor/IEEE754_article.pdf (Goldberg,
David: "What Every Computer Scientist Should Know About Floating-Point
Arithmetic").

If you change the two constants to
    a += 0.5f;
    a -= 0.25f;
the issue vanishes.

Cheers!
robin
Will Godfrey
2018-11-23 12:00:40 UTC
On Thu, 22 Nov 2018 23:29:11 +0100
Post by Robin Gareus
Hi Will,
I just ran your code and -ffast-math does not make any difference.
With or without -ffast-math I get "int: 5 rem: 0.049994".
However, optimizing the code with `-O2 -ffast-math` does make a
difference, because SSE is used.
Do you also use -O2 or -O3 along with -ffast-math?
[...]
Thanks for going into this in such detail Robin. I never realised fp stuff
could be *quite* so, umm, approximate!

I was using -O3. Following on from this, I found that if I removed either
-O3 or -ffast-math the problem disappeared.

It's quite possible that I'm over-thinking this as there are actually only a
few iterations before the initial figures are re-calculated, but I don't like
mysteries :(
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
Robin Gareus
2018-11-23 13:09:02 UTC
On 11/23/2018 01:00 PM, Will Godfrey wrote:
[...]
Post by Will Godfrey
Thanks for going into this in such detail Robin. I never realised fp stuff
could be *quite* so, umm, approximate!
Depending on context and the maths, the difference may not matter at
all, or may be off completely.

float a = (1 + 1e20) - 1e20;
float b = 1 + 1e20; b -= 1e20;
printf ("%f %f\n", a, b);

Any guesses what this code prints without compiling it? :)
Post by Will Godfrey
I was using O3. Following on from this I found that if I removed either O3 or
ffast-math the problem disappeared.
OK, so SSE/AVX/compiler-intrinsics are the issue at hand.
Post by Will Godfrey
It's quite possible that I'm over-thinking this as there are actually only a
few iterations before the initial figures are re-calculated, but I don't like
mysteries :(
You could also disable this for a specific function only

__attribute__((optimize("-fno-fast-math")))
void different_mysteries_here (float val)
{
    ...
}

ciao,
robin
Gordonjcp
2018-11-23 14:18:01 UTC
Post by Robin Gareus
[...]
Post by Will Godfrey
Thanks for going into this in such detail Robin. I never realised fp stuff
could be *quite* so, umm, approximate!
Depending on context and the maths, the difference may not matter at
all, or may be off completely..
Surely the answer is just to use 16-bit ints for everything, then...?
--
Gordonjcp
Nikita Zlobin
2018-11-23 17:33:24 UTC
In Fri, 23 Nov 2018 14:18:01 +0000
Post by Gordonjcp
Post by Robin Gareus
[...]
Post by Will Godfrey
Thanks for going into this in such detail Robin. I never realised
fp stuff could be *quite* so, umm, approximate!
Depending on context and the maths, the difference may not matter at
all, or may be off completely..
Surely the answer is just to use 16-bit ints for everything, then...?
Let me note: it would have to be at least a 32-bit int, since in a
32-bit float the significand is 23 bits.

Of course, that's only if clipping resistance is not important (special
precautions like keeping the signal level in a range of e.g. -6 dB or
-12 dB could emulate it).
Gordonjcp
2018-11-23 17:56:38 UTC
Post by Nikita Zlobin
In Fri, 23 Nov 2018 14:18:01 +0000
Post by Gordonjcp
Post by Robin Gareus
[...]
Post by Will Godfrey
Thanks for going into this in such detail Robin. I never realised
fp stuff could be *quite* so, umm, approximate!
Depending on context and the maths, the difference may not matter at
all, or may be off completely..
Surely the answer is just to use 16-bit ints for everything, then...?
Let me note - it would be at least 32-bit int, since in 32bit float
significand is 23bit.
We don't need that much precision, we only have 10-bit ears.
--
Gordonjcp
Nikita Zlobin
2018-11-23 18:30:47 UTC
In Fri, 23 Nov 2018 17:56:38 +0000
Post by Gordonjcp
Post by Nikita Zlobin
In Fri, 23 Nov 2018 14:18:01 +0000
Post by Gordonjcp
Post by Robin Gareus
[...]
Post by Will Godfrey
Thanks for going into this in such detail Robin. I never
realised fp stuff could be *quite* so, umm, approximate!
Depending on context and the maths, the difference may not
matter at all, or may be off completely..
Surely the answer is just to use 16-bit ints for everything, then...?
Let me note - it would be at least 32-bit int, since in 32bit float
significand is 23bit.
We don't need that much precision, we only have 10-bit ears.
Oh... when you wrote "for everything" I thought it included processing
too, not just listening.
Tim
2018-11-23 18:37:37 UTC
Post by Gordonjcp
Post by Robin Gareus
[...]
Post by Will Godfrey
Thanks for going into this in such detail Robin. I never realised fp stuff
could be *quite* so, umm, approximate!
Depending on context and the maths, the difference may not matter at
all, or may be off completely..
Surely the answer is just to use 16-bit ints for everything, then...?
I've said it before: bankers forbid fp and use BCD math,
otherwise pennies go missing which accumulate into dollars.

Unless Will's requirement is scientific or astronomical,
yet still requires a fairly wide range, I suggest 128-bit int math.
It may not be useful with plugin or jack audio paths, which are
always fp, but for other things like time it really helps.

Until several months ago MusE used fp to represent time, such as
wall-clock time and jack time etc. It caused many timing problems,
so I ripped all of that up and used 128-bit int math instead.

You just have to carefully determine your required range and
choose the units it represents. In MusE's case, the time value
now represents nanoseconds as a 128-bit int instead of fp seconds.

128-bit math is not yet fully supported on all platforms,
especially 32-bit platforms, so I used a third-party library,
'libdivide', which handles the cross-platform stuff.
I made a convenient routine which does A * B / C in 128-bit math,
since that is by far the most common general usage in our app.

Tim.
Nikita Zlobin
2018-11-23 17:06:24 UTC
It could be a lame question, but still... is it possible that some
implementations (compiler/hardware) will print 1? (Even a 64-bit double
doesn't hold 20 decimal digits.)

In Fri, 23 Nov 2018 14:09:02 +0100
Post by Robin Gareus
[...]
Post by Will Godfrey
Thanks for going into this in such detail Robin. I never realised
fp stuff could be *quite* so, umm, approximate!
Depending on context and the maths, the difference may not matter at
all, or may be off completely..
float a = (1 + 1e20) - 1e20;
float b = 1 + 1e20; b -= 1e20;
printf ("%f %f\n", a, b);
Any guesses what this code prints without compiling it? :)
[...]
_______________________________________________
Linux-audio-dev mailing list
https://lists.linuxaudio.org/listinfo/linux-audio-dev
Nikita Zlobin
2018-11-23 17:37:08 UTC
It's probably going to be a silly question, but after a short analysis I
don't see what could be broken in this demo snippet, when float is the
standard single-precision 32-bit type.

I omit the first case, as an optimizing compiler could just optimize it
to =1; though it could do that in the second case too... (as assumed, I
did not try to compile it yet).
But even without under-the-hood tricks, a 32-bit float has 23/24 bits
for the significand, thus

In Fri, 23 Nov 2018 14:09:02 +0100
Post by Robin Gareus
[...]
Post by Will Godfrey
Thanks for going into this in such detail Robin. I never realised
fp stuff could be *quite* so, umm, approximate!
Depending on context and the maths, the difference may not matter at
all, or may be off completely..
float a = (1 + 1e20) - 1e20;
float b = 1 + 1e20; b -= 1e20;
printf ("%f %f\n", a, b);
Any guesses what this code prints without compiling it? :)
[...]
Fons Adriaensen
2018-11-24 10:14:06 UTC
Post by Robin Gareus
A simpler example to show this is
#include <stdio.h>
#include <math.h>
int main (int argc, char** argv) {
float a = 0;
for (int i = 0; i < 100; ++i) {
a += 0.1f;
a -= 0.05f;
a = fmodf (a, 1.f);
}
printf ("%f\n", a);
return 0;
}
using gcc 6.3.0-18+deb9u1, x86_64, this
prints 1.000000 (when compiled with -O0)
and 0.000001 (when compiled with -O2 --fast-math)
Actually 0.999999940 and 0.000000596.

The 1.000000 would imply that fmodf (a, 1.0f) was plain wrong,
but it isn't.

The examples shown in this thread all involve converting
floats to ints. Typical application of this in audio is
to convert a float index into a wavetable to an int index
and a float < 1 interpolation argument.

The dangerous thing to do is:

// given float p

int i = (int) floorf (p);
float f = fmodf (p, 1.0f);

as you could end up with i + f != p.

The safe way is of course:

int i = (int) floorf (p);
float f = p - i;


Ciao,
--
FA
Will Godfrey
2018-11-24 10:49:46 UTC
On Sat, 24 Nov 2018 11:14:06 +0100
Fons Adriaensen <***@linuxaudio.org> wrote:
<snip>
Post by Fons Adriaensen
// given float p
int i = (int) floorf (p);
float f = fmodf (p, 1.0f);
as you could end up with i + f != p.
int i = (int) floorf (p);
float f = p - i;
Ciao,
Thanks very much for that Fons :)

I'd been mulling over *exactly* that point for some time. My reasoning being
that in the latter case, if the integer was slightly wrong then using it for
the subtraction should give a remainder that was also slightly wrong, but in a
direction that tends to compensate for the error.

The other thing is: why do we need the floorf? Or, in the original
example (which was taken from working code), truncf?

A straight cast to int seems to give exactly the same result, and is at
least twice as fast with -O3 and about 4 times as fast unoptimised.
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
Fons Adriaensen
2018-11-24 13:19:01 UTC
Post by Will Godfrey
Post by Fons Adriaensen
int i = (int) floorf (p);
float f = p - i;
I'd been mulling over *exactly* that point for some time. My reasoning being
that in the latter case, if the integer was slightly wrong then using it for
the subtraction should give a remainder that was also slightly wrong, but in a
direction that tends to compensate for the error.
How can an int be 'slightly wrong' ? :-)
The advantage of the 'safe' way is that you always have p == i + f.
Post by Will Godfrey
The other thing, is why do we need the floorf? Or in the original example
(which was taken from working code) truncf?
A straight cast to int seems to give exactly the same result, and is at least
twice as fast with -O3 and about 4 times as fast unoptimised.
We want f >= 0, so rounding must be towards -inf. Casting will truncate
(i.e. round towards zero). This gives the correct result only if p >= 0.
That may be all you need, but I wouldn't like to depend on it.

There is a way to avoid all float to int conversions, at least outside
the per-sample loops.

Suppose you have a wavetable of size L, current position is float p,
and increment is float f. To generate N samples you'd have something
like:


for (i = 0; i < N; i++)
{
    k = floorf (p);
    u = p - k;

    // use k, u to interpolate in wavetable

    p += f;
    if (p >= L) p -= L;
}

To avoid floorf() inside the loop, instead of maintaining p and f
as floats, split both of them from the start into an integer and
float part:

k = floorf (p);
u = p - k;

kf = floorf (f);
uf = f - kf;

for (i = 0; i < N; i++)
{
    // use k, u to interpolate in wavetable

    k += kf;
    u += uf;
    if (u >= 1.0f)
    {
        k += 1;
        u -= 1;
    }
    if (k >= L) k -= L;
    // or k &= L-1 if L is a power of 2.
}


Ciao,
--
FA
Will Godfrey
2018-11-25 10:18:06 UTC
On Sat, 24 Nov 2018 14:19:01 +0100
Post by Fons Adriaensen
Post by Will Godfrey
Post by Fons Adriaensen
int i = (int) floorf (p);
float f = p - i;
I'd been mulling over *exactly* that point for some time. My reasoning being
that in the latter case, if the integer was slightly wrong then using it for
the subtraction should give a remainder that was also slightly wrong, but in a
direction that tends to compensate for the error.
How can an int be 'slightly wrong' ? :-)
Ummm, bad wording on my part. I meant out by 1. The difference would be made up
next period.
Post by Fons Adriaensen
The advantage of the 'safe' way is that you always have p == i + f.
Post by Will Godfrey
The other thing, is why do we need the floorf? Or in the original example
(which was taken from working code) truncf?
A straight cast to int seems to give exactly the same result, and is at least
twice as fast with -O3 and about 4 times as fast unoptimised.
We want f >= 0, so rounding must be towards -inf. Casting will truncate
(i.e. round towards zero). This gives the correct result only if p >= 0.
That may be all you need, but I wouldn't like to depend on it.
Well in this particular case, it can't be less than zero - notwithstanding time
travel :)
Post by Fons Adriaensen
There is a way to avoid all float to int conversions, at least outside
the per-sample loops.
Suppose you have a wavetable of size L, current position is float p,
and increment is float f. To generate N samples you'd have something
for (i = 0; i < N; i++)
{
k = floorf (p);
u = p - k;
// use k, u to interpolate in wavetable
p += f;
if (p >= L) p -= L;
}
To avoid floorf() inside the loop, instead of maintaining p and f
as floats, split both of them from the start into an integer and
k = floorf (p);
u = p - k;
kf = floorf (f);
uf = f - kf;
for (i = 0; i < N; i++)
{
// use k, u to interpolate in wavetable
k += kf;
u += uf;
if (u >= 1.0f)
{
k += 1;
u -= 1;
}
if (k >= L) k -= L;
// or k &= L-1 if L is a power of 2.
}
Ciao,
Interesting. I'll look into this.
--
Will J Godfrey
http://www.musically.me.uk
Just because you know what I'm talking about, it doesn't mean *I* do.