Discussion:
[LAD] Jitter analysis
benravin
2017-09-17 05:17:04 UTC
Hi,

I'm facing timing jitter which happens periodically due to some interrupt
delaying the task. Since this happens periodically, it is in effect a
slowly varying timing jitter: for example, every 400 ms the timing
deviation is on the order of a few ms (4-6 ms). This is not filtered
out by the DLL, and results in a slowly varying oscillation which never
dies out.

Is there any way to identify and limit this timing jitter, so that the
DLL does not act on it as drift correction?

-ben



--
Sent from: http://linux-audio.4202.n7.nabble.com/linux-audio-dev-f58952.html
David Olofson
2017-09-17 13:03:39 UTC
On Sun, Sep 17, 2017 at 7:17 AM, benravin <***@outlook.com> wrote:
[...]
Post by benravin
is indeed a slow varying timing jitter, for example every 400ms, the timing
[...]

Context and environment...?

Is there by any chance sample rate conversion going on somewhere?
(Hardware or software; usually behaves in about the same manner.)
Since buffer sizes in most environments need to stay fixed, and usually
also have further restrictions, this tends to affect buffer/callback timing.
As a result, input-to-process and process-to-output latency drift over
time, and pop back (buffer drop, or extra buffer) on a regular basis.
--
//David Olofson - Consultant, Developer, Artist, Open Source Advocate

.--- Games, examples, libraries, scripting, sound, music, graphics ---.
| http://consulting.olofson.net http://olofsonarcade.com |
'---------------------------------------------------------------------'
benravin
2017-09-17 14:53:46 UTC
Post by David Olofson
Is there by any chance sample rate conversion going on somewhere?
(Hardware or software; usually behaves in about the same manner.)
_______________________________________________
Linux-audio-dev mailing list
https://lists.linuxaudio.org/listinfo/linux-audio-dev
Yes, sample rate conversion is done to adjust for the drift in sample
rate. But these periodic timing errors are making the system oscillate.



-ben
Fons Adriaensen
2017-09-17 17:09:51 UTC
Post by benravin
I'm facing timing jitter which happens periodically due to some interrupt
delaying the task. Since this happens periodically, it is in effect a
slowly varying timing jitter: for example, every 400 ms the timing
deviation is on the order of a few ms (4-6 ms). This is not filtered
out by the DLL, and results in a slowly varying oscillation which never
dies out.
Is there any way to identify and limit this timing jitter, so that the
DLL does not act on it as drift correction?
It's impossible to say anything about this if you don't provide
numbers. How big is the resulting resampling ratio variation ?

If a few ms jitter leads to anything perceptible then your DLL
and/or resampling control loop are not dimensioned correctly,
or there is another basic problem with your design.

Ciao,
--
FA

A world of exhaustive, reliable metadata would be an utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)
benravin
2017-09-19 16:45:34 UTC
Please find the attached Excel sheet, which has the data of the read &
write timings.

Both read and write are of the same length.

The alpha & beta coefficients are:

float c1 = 0.017771532; /* alpha */
float c2 = 0.000157914; /* beta */

The audio control loop's local FIFO has two buffers, 1920 samples/buffer.

aud_ctrl_err.xlsx
<http://linux-audio.4202.n7.nabble.com/file/t2646/aud_ctrl_err.xlsx>
Post by Fons Adriaensen
It's impossible to say anything about this if you don't provide
numbers. How big is the resulting resampling ratio variation ?
-ben
benravin
2017-09-23 02:47:26 UTC
Please find the attached block diagram showing how the control loop is
placed in the whole system.

In the figure, the first block is the RF IQ acquisition block, which
captures the RF samples and attaches a timestamp. The data is then passed
on to the channel and audio decoders and finally reaches the audio sink,
which plays back the audio in fragments.

The audio control loop module has two inputs and one output. The inputs
receive the timestamps of the write side and the read side; in this case
the write side is the RF capture and the read side is the audio sink.
Note that these two timestamps come from different clocks: the RF capture
uses the PC CPU clock, whereas the audio sink uses the sound card clock.
The output of the audio control loop is used to control the resampler,
which sits between the audio decoder and the audio sink.

AudCtrlLoop.JPG

-ben
Post by Fons Adriaensen
It's impossible to say anything about this if you don't provide
numbers. How big is the resulting resampling ratio variation ?
benravin
2017-10-06 18:03:01 UTC
Post by Fons Adriaensen
It's impossible to say anything about this if you don't provide
numbers. How big is the resulting resampling ratio variation ?
If a few ms jitter leads to anything perceptible then your DLL
and/or resampling control loop are not dimensioned correctly,
or there is another basic problem with your design.
Actually I'm working on embedded Linux. I have a requirement to configure
the SoC as audio output master or slave. If I use the SoC audio output as
slave and the DAC as master, then zita-ajbridge works as expected.

But if the SoC audio output is configured as master and the DAC as slave,
and the SoC system clock is used for timestamping the write and read
buffers, then any drift of the SoC clock will not be detected.

For example:

At the write side, t_wA and t_wB are the timestamps of buffer1 and
buffer2, and at the read side t_rA and t_rB:

t_w = t_wB - t_wA
t_r = t_rB - t_rA

error = t_r - t_w

This error is minimized by the audio control loop.

And if the SoC is configured as master, then any drift on the write side
will be present on the read side as well. How can it be detected and
corrected?

-ben
Fons Adriaensen
2017-10-07 21:37:02 UTC
Post by benravin
Post by Fons Adriaensen
It's impossible to say anything about this if you don't provide
numbers. How big is the resulting resampling ratio variation ?
If a few ms jitter leads to anything perceptible then your DLL
and/or resampling control loop are not dimensioned correctly,
or there is another basic problem with your design.
Actually I'm working on Embedded Linux. I have a requirement to configure
the SoC as audio output master or slave. If I use the SoC audio output as
slave and DAC as master then zita-ajbridge will work as expected.
But if SoC audio output is configured as master and DAC as slave and use the
SoC system clock for timestamping the write and read buffers, then any drift
on SoC clock will not be detected.
Apparently you are trying to implement something similar to zita-ajbridge
or zita-njbridge, or at least something based on the algorithm described
in my LAC paper. Since at least November 2016 I've been trying to
help you, and I remember having spent hours writing emails to
explain things. But each time I've done so, you come back a week
or so later, telling me that you have tried something different,
and that it doesn't work.
Post by benravin
At the write side, t_wA and t_wB are the timestamps of buffer1 and buffer2
and read side t_rA and t_rB
t_w = t_wB - t_wA
t_r = t_rB - t_rA
error = t_r - t_w
This error is minimized by audio control loop.
And if SoC is configured as master then any drift on write side will be
there on read side as well. How to detect and correct it ?
This is not how the algorithm of zita-[an]jbridge works. It doesn't
try to minimise t_r - t_w whatever those are, and it doesn't use
fixed-size reads and writes on the resampler side of the buffer.
I've told you already a number of times that it couldn't work that
way and explained why. Together with a lot of other things which
you consistently choose to ignore.

When I ask for more info on how your thing works you never provide
any useful information. All I get are some details which are pretty
useless without the full context.

This has been going on for at least 10 months now, and it ends
here as far as I'm concerned.

Ciao,
--
FA

A world of exhaustive, reliable metadata would be an utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)