Discussion:
[ANN] sverb 0.90
Cedric Roux
2006-02-12 14:40:13 UTC
Permalink
Dear Linux audio people,

sverb 0.90 is out at:
http://sed.free.fr/sverb

sverb is an order-15 CFDN reverb.

Changes:
More presets were added.

If someone wants to contribute LADSPA support, it's welcome.
We could have one effect for each preset (with names like
"short reverb 1", "huge reverb").
Then, for each preset, we can control the reverb with two parameters
(t60(0) and t60(pi)), which would make a nice and tiny GUI.
(Maybe also add a dry/wet control.)
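To give a feel for what a t60 control means: in a feedback delay network, each delay line's feedback gain is chosen so that the recirculating signal has decayed by 60 dB after t60 seconds. A minimal sketch of that relation (illustrative only, not sverb's actual code; a real CFDN uses a frequency-dependent gain interpolating between t60(0) at DC and t60(pi) at Nyquist):

```python
def feedback_gain(delay_samples, t60_seconds, sample_rate=44100):
    """Feedback gain for one delay line so the recirculating signal
    has decayed by 60 dB (a factor of 1000) after t60 seconds."""
    # number of passes through the delay line during t60 seconds
    passes = t60_seconds * sample_rate / delay_samples
    return 0.001 ** (1.0 / passes)

# a 50 ms line (2205 samples at 44.1 kHz) with a 2 s decay time
g = feedback_gain(2205, 2.0)
```

A longer delay line needs a smaller per-pass gain to reach the same decay time.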

We also need to handle stereo, with a basic decorrelation
for example (different delays for left and right channel).
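The decorrelation idea fits in a few lines (a hypothetical sketch, not sverb code; delays are in samples):

```python
def decorrelate(mono, left_delay, right_delay):
    """Derive a crude stereo pair from a mono signal by giving each
    channel a different delay (in samples)."""
    left = [0.0] * left_delay + list(mono)
    right = [0.0] * right_delay + list(mono)
    # pad the shorter channel so both have equal length
    n = max(len(left), len(right))
    left += [0.0] * (n - len(left))
    right += [0.0] * (n - len(right))
    return left, right
```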

Since sverb has three internal operating modes (float, int, asm),
I think three .so would be nice. For the asm and int libraries,
we could add a third control for the bit resolution (or leave
it at a default, currently 14, which could perhaps be set at compile
time).

Before going to 1.0, I need some feedback about the quality of
the various reverbs (I know that big 2 is not that good).

Also, if someone knows how to define good parameters (by hand
or algorithmically) for the delay lines, help is very welcome.

Take care of yourself,
Cedric.
Julien Claassen
2006-02-20 09:27:51 UTC
Permalink
Hi Cedric!
Can sverb work without a GUI, like a simple command-line utility? I'm blind,
yet would like to use this piece of software. It sounds great!
Kindest regards
Julien

--------
Music was my first love and it will be my last (John Miles)

======== FIND MY WEB-PROJECT AT: ========
http://ltsb.sourceforge.net - the Linux TextBased Studio guide
Cedric Roux
2006-02-20 14:43:39 UTC
Permalink
Hi Julien,

I'll try to work on it.
What about a pipe-approach?
I don't want to deal with the zillions of audio file formats,
so I'll take raw mono input on stdin (float or short, little or
big endian) and produce raw data (float or short) on stdout.
You could set parameters with zillions of command line arguments
(mostly: choose a preset, a t60(0), a t60(pi), a float/int/asm
calculation mode, a bit resolution for the fixed-point arithmetic,
a dry/wet control, a bypass-t60 toggle, controls for input/output
formats).
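A rough sketch of what such a filter program could look like (hypothetical; the real tool would run the reverb where the placeholder gain is, and take the format options described above as flags):

```python
import struct
import sys

def process(raw, gain=0.5):
    """Unpack little-endian float32 samples, apply a placeholder
    effect (a plain gain stands in for the reverb), and repack."""
    n = len(raw) // 4
    samples = struct.unpack("<%df" % n, raw[:n * 4])
    return struct.pack("<%df" % n, *(s * gain for s in samples))

if __name__ == "__main__" and not sys.stdin.isatty():
    # stream stdin -> stdout in blocks, as the proposed tool would
    while True:
        block = sys.stdin.buffer.read(4096)
        if not block:
            break
        sys.stdout.buffer.write(process(block))
```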

Maybe it should handle stereo input and/or output?

The algorithm was implemented at a 44.1 kHz sample rate but
can work with anything (of course the same set of parameters
won't give the same reverberation time).

What do you think of it?

I don't have much time for the moment, but as soon as I get
some, I'll do it. Tell me what you absolutely need, so I can
provide you with it as soon as I can. (Of course if someone
has some idle brain cycles, any contribution is welcome.)

Take care,
Cédric.
Julien Claassen
2006-02-28 20:48:13 UTC
Permalink
Hi Cedric!
I think this will be perfectly nice for a start. But what about libsndfile?
I'm not such a good programmer, but I think it makes handling those formats
relatively easy. But this is just an idea to make it more comfortable. If it's
not so easy and quick after all, just go ahead.
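libsndfile is indeed the usual answer in C. As a rough illustration of how little code the simple case needs, here is the same idea for plain 16-bit WAV files using only Python's stdlib (a sketch, unrelated to sverb's actual sources):

```python
import io
import struct
import wave

def wav_to_raw_floats(wav_bytes):
    """Decode a 16-bit PCM WAV (what libsndfile would do for many
    more formats) into a list of floats in [-1, 1)."""
    with wave.open(io.BytesIO(wav_bytes)) as w:
        assert w.getsampwidth() == 2, "sketch handles 16-bit PCM only"
        frames = w.readframes(w.getnframes())
    n = len(frames) // 2
    ints = struct.unpack("<%dh" % n, frames)
    return [i / 32768.0 for i in ints]
```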
Kindest regards
Julien

Julian Storer
2006-02-28 21:49:04 UTC
Permalink
Hi folks

A while ago there was some talk on the newsgroup about my Juce library,
and people were asking if/when I'd add support for audio under Linux..
well it's taken me a while to get round to it, but I finally battled
through the hostile, undocumented jungle of ALSA, and the latest Juce
release does finally make a noise under Linux!

Some quick background info for those of you who won't be familiar with
Juce - it's a cross-platform (Windows/Mac/Linux) GUI + everything else
library, similar to Qt, wxWindows, etc, released under the GPL. Because
of my background (I wrote Tracktion), there's a lot of audio stuff in
there, and it's got cross-platform support for DirectSound, ASIO,
CoreAudio.. and now ALSA.

So anyway, if anyone's interested in having a go, that'd be great, as so
far I've only been able to test it on my laptop's built-in soundcard!
The Juce demo app has an audio page which monitors incoming audio, plays
wavefiles and has a simple synthesizer. It lets you pick a soundcard,
change its sample rate, etc.

Hope this is of interest to people! More info here:
http://www.rawmaterialsoftware.com/juce
and downloads here:
http://www.rawmaterialsoftware.com/juce/download.php

(Oh - and before anyone asks "does this mean Tracktion is going to come
out on Linux soon", the answer is "I don't know"!)

Cheers!

Jules
Lee Revell
2006-02-28 22:02:34 UTC
Permalink
Post by Julian Storer
the hostile, undocumented jungle of ALSA
Please, stop repeating the myth that ALSA is undocumented.

http://www.alsa-project.org/documentation.php
http://www.alsa-project.org/alsa-doc/alsa-lib/
http://www.alsa-project.org/alsa-doc/alsa-lib/examples.html

Lee
Paul Davis
2006-02-28 22:30:40 UTC
Permalink
Post by Julian Storer
[...] I finally battled
through the hostile, undocumented jungle of ALSA, and the latest Juce
release does finally make a noise under Linux!
Why did you decide to add ALSA support rather than JACK (which is 200%
simpler than ALSA, 150% closer to ASIO/CoreAudio's callback model, and
200% more useful to 50% of users)? Or at least, why not PortAudio?

;)

--p
Julian Storer
2006-02-28 22:33:45 UTC
Permalink
I could (and probably will) add Jack support later, but assumed that
ALSA's the lowest-common-denominator sound API.
Post by Paul Davis
why did you decide to add ALSA support rather than JACK (which is 200%
simpler than ALSA, 150% closer to ASIO/CoreAudio's callback model and
200% more useful to 50% of users) ? or at least, why not portaudio?
Lee Revell
2006-02-28 23:31:07 UTC
Permalink
Post by Julian Storer
I could (and probably will) add Jack support later, but assumed that
ALSA's the lowest-common-denominator sound API.
ALSA is a good choice for a "consumer" app like a movie or CD player.
For musician stuff, JACK has been the standard for a few years now.

Lee
Kevin Hremeviuc
2006-02-28 23:00:46 UTC
Permalink
Hi all,

<rant>
I am sorry, but the ALSA documentation is poor. I
always resort to looking at other people's applications
to see what they have done. A case in point is the
ALSA sequencer. It does have a nice page which
explains just enough to get you going, but the API is
much larger than this and some of the function
documentation is banal, to say the least.

The audio stuff is even worse!
There is some conceptual documentation which seems
very old and I am not sure whether it is up to date.
</rant>

<caveat>
For an open source project that is not used
commercially, the documentation is amongst the best
that I have seen, and we all know how difficult it is
to find time to spend on this stuff. I have been
trying for years but between work, study and actually
trying to make some music I have ended up making no
contribution (so who am I to talk!).
</caveat>

sorry but that's my opinion,

Kev
Post by Lee Revell
Post by Julian Storer
the hostile, undocumented jungle of ALSA
Please, stop repeating the myth that ALSA is
undocumented.
http://www.alsa-project.org/documentation.php
http://www.alsa-project.org/alsa-doc/alsa-lib/
http://www.alsa-project.org/alsa-doc/alsa-lib/examples.html
Lee
___________________________________________________________
To help you stay safe and secure online, we've developed the all new Yahoo! Security Centre. http://uk.security.yahoo.com
Kevin Hremeviuc
2006-03-01 03:56:29 UTC
Permalink
Hi Julian,

Just compiled juce and tried the demo programme. Got
the error contained in the attached screen capture. I
don't normally have any problems running alsa audio
apps, however the fault may be in my alsa setup.

Kev
Post by Julian Storer
So anyway, if anyone's interested in having a go, that'd be great, as so
far I've only been able to test it on my laptop's built-in soundcard!
Lee Revell
2006-03-01 04:13:53 UTC
Permalink
Post by Kevin Hremeviuc
Hi Julian,
Just compiled juce and tried the demo programme. Got
the error contained in the attached screen capture. I
don't normally have any problems running alsa audio
apps, however the fault may be in my alsa setup.
Kev
It appears that Juce is opening the "hw" PCM, which only supports S32_LE
on your device. It should be opening the "default" PCM, which will
automagically convert whatever format Juce uses to S32_LE.

Lee
Julian Storer
2006-03-01 09:02:31 UTC
Permalink
Ok, thanks Lee - I deliberately used the hw pcm devices though, as they
gave better performance than the default one. I guess it just needs
support for some extra data formats.
Post by Lee Revell
It appears that Juce is opening the "hw" PCM, which only supports S32_LE
on your device. It should be opening the "default" PCM which will
automagically convert whatever format Juce uses to S32_LE.
Lee
James Courtier-Dutton
2006-03-01 10:42:08 UTC
Permalink
Post by Julian Storer
Ok, thanks Lee - I deliberately used the hw pcm devices though, as
they gave better performance than the default one. I guess it just
needs support for some extra data formats.
What do you mean? Better performance in what way?
The "default" device will have no performance hit if the sound card supports
the same format as the application; otherwise, "default" will
automatically do sample format conversion.
In any case, the ALSA device name to use should be user-configurable.

James



This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you.
Julian Storer
2006-03-01 11:56:17 UTC
Permalink
Am I right in assuming that by default you mean the "plughw:" devices?
On my machine the plughw: device glitched constantly, but the hw:
devices worked really well. There was also some other reason I chose
that.. can't remember offhand what it was though.
James Courtier-Dutton
2006-03-01 12:31:09 UTC
Permalink
Post by Julian Storer
Am I right in assuming that by default you mean the "plughw:" devices?
On my machine the plughw: device glitched constantly, but the hw:
devices worked really well. There was also some other reason I chose
that.. can't remember offhand what it was though.
"default" means just that: use the name "default" instead of "plughw:..."
or "hw:0,0".
If the device glitched when using "plughw:" then it is a bug in ALSA or
your application.
How is your application deciding on sample rate?

There is no sensible reason to ever use the "hw:0,0" device. Always use
the "plug:front" and friends.
If an application only works with "hw:0,0" it has been written wrongly.

James

P.S. Please don't top post.



cdr
2006-03-01 12:44:07 UTC
Permalink
Yeah, this is good news: a Windows/Mac developer who has ventured into the fray. Awfully rare in this day and age. To most of these people, it's as if this world doesn't exist.
Post by James Courtier-Dutton
[...]
P.S. Please don't top post.
What does this mean: put the reply below the quoted content? In most readers I can think of off the top of my head, the top is the best place to put a reply, since it is more likely to show up in a brief synopsis and not require paging down. But then most of the world likes to pre-wrap their messages, which is annoyingly narrow on a big screen or suffers from linebreak hell on a cellphone, so I guess I'll remain on the fringes of acceptability.
Julian Storer
2006-03-01 12:51:46 UTC
Permalink
Post by James Courtier-Dutton
"default" means just that. use the name "default" instead of
plughw:.... or hw:0,0
There is no sensible reason to ever use the "hw:0,0" device. Always
use the "plug:front" and friends.
If an application only works with "hw:0,0" it has been written wrongly.
Ok, at the risk of "spreading the myth" that the ALSA documentation is
bad... is this stuff actually explained anywhere?? It took me a day of
googling just to find out what the two numbers after "hw" meant! I never
saw anything mention "default" or "plug:front", etc.

Not sure if it'd be appropriate anyway, though, as my API exposes a list
of drivers and lets the user choose which one to use, and the sample
rate, rather than just using the default driver.
Lee Revell
2006-03-01 17:26:00 UTC
Permalink
Post by Julian Storer
Ok, at the risk of "spreading the myth" that the ALSA documentation is
bad... is this stuff actually explained anywhere?? It took me a day of
googling just to find out what the two numbers after "hw" meant! I never
saw anything mention "default" or "plug:front", etc.
Not sure if it'd be appropriate anyway, though, as my API exposes a list
of drivers and lets the user choose which one to use, and the sample
rate, rather than just using the default driver.
Yes, see the links to the ALSA documentation I posted in my last
message. You also could have searched the mailing list archives.

Lee
Lee Revell
2006-03-01 17:59:12 UTC
Permalink
Post by Julian Storer
Ok, at the risk of "spreading the myth" that the ALSA documentation is
bad... is this stuff actually explained anywhere?? It took me a day of
googling just to find out what the two numbers after "hw" meant! I never
saw anything mention "default" or "plug:front", etc.
Not sure if it'd be appropriate anyway, though, as my API exposes a list
of drivers and lets the user choose which one to use, and the sample
rate, rather than just using the default driver.
Here is some information:

http://www.sabi.co.uk/Notes/linuxSoundALSA.html

You are right that there's not a lot of "user-level" documentation for
ALSA, because it's not really intended to be used by end users - ALSA
provides a complete HAL which more user-friendly APIs like JACK are
built on top of. You would not write a complex app in straight Xlib...

The standard approach for open source development is to ask about it on
one of the many mailing lists or IRC channels, rather than forging ahead
on your own with only Google as your guide. Due to the proliferation of
Wikis by less informed users, most of the information that Google
returns is half-wrong.

Lee
Julian Storer
2006-03-01 18:42:37 UTC
Permalink
Post by Lee Revell
http://www.sabi.co.uk/Notes/linuxSoundALSA.html
[...]
Lee
Thanks for the link - I'd not seen that page before. I'll have another
look at all this asap.
Lee Revell
2006-03-01 18:53:26 UTC
Permalink
Post by Julian Storer
Thanks for the link - I'd not seen that page before. I'll have another
look at all this asap.
Really, your time would be better spent getting to know JACK, or even
PortAudio, than trying to get your brain around ALSA. But feel free to
ask here or on alsa-devel or alsa-user at lists.sourceforge.net if you
have any ALSA questions...

Lee
Lee Revell
2006-03-01 17:23:44 UTC
Permalink
Post by Julian Storer
Ok, thanks Lee - I deliberately used the hw pcm devices though, as they
gave better performance than the default one. I guess it just needs
support for some extra data formats.
This is broken - you'd have to embed knowledge of every single format
that ALSA supports in your app, or else your app will only support
certain soundcards!

Please, at least make this configurable.

Lee
Sampo Savolainen
2006-03-01 08:22:26 UTC
Permalink
Post by Julian Storer
Hi folks
A while ago there was some talk on the newsgroup about my Juce library,
and people were asking if/when I'd add support for audio under Linux..
well it's taken me a while to get round to it, but I finally battled
through the hostile, undocumented jungle of ALSA, and the latest Juce
release does finally make a noise under Linux!
This is truly great news. I really appreciate the effort you have put into
the port, and to supporting Linux.

Alas, ALSA. It's very unfortunate that you chose ALSA as the API for Linux.
For me, applications without jack support are of zero interest.

I live in a world where I can just connect every soft synth and drum machine
and midi sequencer via jack to ardour, sync them via jack transport, do
mastering inside this environment with Jamin.

I'm capable of exporting & mastering faster than realtime, because all the
software inside the "jack loop" runs in perfect sync, in time &
on time with jack instead of the soundcard.

These are the benefits of jack. This is why I will give almost no
consideration to an application which isn't jackd compatible.

The worst thing that can happen is that you get discouraged. I really do
hope you won't. Like Paul said, the jackd api is super-simple; you
should have no trouble at all making a driver for JUCE which uses jackd.

Please, pretty please, with sugar, whipped cream and a cocktail cherry on top.
Don't give up! :)

(And I guess you've already worked through the GUI & input issues of porting
JUCE to Linux, so one more driver should be a breeze.)

Sampo
Dave Robillard
2006-03-01 21:29:48 UTC
Permalink
Post by Sampo Savolainen
Alas, ALSA. It's very unfortunate that you chose ALSA as the API for Linux.
For me, applications without jack support are of zero interest.
++

-DR-
Lee Revell
2006-03-01 21:37:16 UTC
Permalink
Post by Sampo Savolainen
Alas, ALSA. It's very unfortunate that you chose ALSA as the API for Linux.
For me, applications without jack support are of zero interest.
++
Don't be too discouraging! If it already supports callback-based APIs
like ASIO (or ALSA, optionally) it might be very easy to port.

Lee
Pedro Lopez-Cabanillas
2006-03-01 18:45:13 UTC
Permalink
Post by Julian Storer
Post by James Courtier-Dutton
"default" means just that. use the name "default" instead of
plughw:.... or hw:0,0
[...]
Post by Julian Storer
Ok, at the risk of "spreading the myth" that the ALSA documentation is
bad... is this stuff actually explained anywhere?? It took me a day of
googling just to find out what the two numbers after "hw" meant! I never
saw anything mention "default" or "plug:front", etc.
http://www.alsa-project.org/alsa-doc/alsa-lib/pcm.html#pcm_dev_names

Regards,
Pedro
Lee Revell
2006-03-01 19:03:53 UTC
Permalink
Post by Pedro Lopez-Cabanillas
http://www.alsa-project.org/alsa-doc/alsa-lib/pcm.html#pcm_dev_names
Actually I think Julian has a point - I can't find the doc anywhere that
says you output to the front speakers by opening the "front" device and
to the rear speakers by opening the "rear" device, that apps should open
the "default" PCM by default, that "hw:x" should only be used for special
cases where direct hardware access is required (like JACK), etc. We seem
to just assume that people will ask on the mailing list, or look at how
another app does it.

All the docs I can find are targeted at someone who already knows how
ALSA works but needs more detail, so most people end up thinking it's
WAY more complicated just to get ALSA to produce sound than it is.
There's plenty of docs at the advanced developer level but not much
below that.

Just as it would be pointless to improve the Xlib docs now that GTK+ and
QT are available, I don't see this being fixed anytime soon, because we
should be steering people towards higher level APIs anyway, and these
are quite solid.

Maybe we just need an ALSA mini HOWTO.

Lee
Chris Cannam
2006-03-01 19:57:15 UTC
Permalink
Post by Lee Revell
Actually I think Julian has a point - I can't find the doc anywhere
that says you output to the front speakers by opening the "front"
device and rear the "rear" device, that apps should open the
"default" PCM by default, that "hw:x" should only be used for special
cases where direct hardware access is required (like JACK), etc.
Only half a dozen people in the world know these things. And here you
are leaking them onto a public mailing list!


Chris
Jay Vaughan
2006-03-01 20:09:42 UTC
Permalink
Post by Chris Cannam
Only half a dozen people in the world know these things. And here you
are leaking them onto a public mailing list!
It's in the code, duh... what more grok do you need?
--
;

Jay Vaughan
Lee Revell
2006-03-01 20:15:35 UTC
Permalink
Post by Chris Cannam
Only half a dozen people in the world know these things. And here you
are leaking them onto a public mailing list!
aplay -L

...
cards 'cards.pcm'
front 'cards.pcm.front'
rear 'cards.pcm.rear'
center_lfe 'cards.pcm.center_lfe'
side 'cards.pcm.side'
surround40 'cards.pcm.surround40'
surround41 'cards.pcm.surround41'
surround50 'cards.pcm.surround50'
surround51 'cards.pcm.surround51'
surround71 'cards.pcm.surround71'
iec958 'cards.pcm.iec958'
spdif 'cards.pcm.iec958'
modem 'cards.pcm.modem'
phoneline 'cards.pcm.phoneline'
default 'cards.pcm.default'
dmix 'cards.pcm.dmix'
dsnoop 'cards.pcm.dsnoop'

Lee
Lee Revell
2006-03-01 20:26:00 UTC
Permalink
Post by Lee Revell
aplay -L
[...]
Seriously, I do think a valid point has been raised (though the problem
is not nearly as bad as "ALSA is an undocumented maze").

Besides my previous statement about use of front, rear, default PCMs,
what other details might an ALSA mini HOWTO for end users cover?

Please don't mention .asoundrc (should be completely invisible to the
end user if everything is working right) or alsa-lib API (developer
stuff is already well documented).

Lee
Christoph Eckert
2006-03-01 21:17:23 UTC
Permalink
Post by Lee Revell
Besides my previous statement about use of front, rear, default PCMs,
what other details might an ALSA mini HOWTO for end users cover?
Freely adapted from my talk about Linux audio usability at LAC 2005:

»Users dislike reading documentation and hackers dislike writing
documentation. So a reasonable thing seems to be to reduce the amount
of documentation needed.«

See my other post.


Best regards


ce
David Kastrup
2006-03-01 21:32:55 UTC
Permalink
Post by Lee Revell
aplay -L
[...]
iec958 'cards.pcm.iec958'
spdif 'cards.pcm.iec958'
[...]
It is actually more like a hoax. To wit: I have a sound card built
into my laptop without any spdif/iec958 folderol. Now I plug an
IEC958 adapter into USB (which counts as a sound card of its own).

What happens now if I do
aplay -D spdif something.wav
? Most certainly the sound card with the S/PDIF output does not get
used. Instead some nonsense happens.

I have found no way whatsoever to get this card to output anything
except by using Ubuntu's Sound preference setting to make the
ridiculous S/PDIF only sound gadget the default sound device.

Only then will I ever get aplay (which in contrast to alsamixer does
not have something like a -c 1 option to specify card 1) to use the
SPDIF gadget. But I don't want to use the gadget for all of my
output.

And I have been unable to find any man page or info or web page
accessible by Google that would tell me how to do this.

It is rather silly.
--
David Kastrup, Kriemhildstr. 15, 44793 Bochum
Lee Revell
2006-03-01 20:50:06 UTC
Permalink
Post by David Kastrup
What happens now if I do
aplay -D spdif something.wav
? Most certainly not the soundcard with the S/PDIF output gets used.
Instead some nonsense happens.
It will try to play to the spdif interface on card 0 (the onboard one)
which will fail.

aplay -D spdif:1 something.wav should DTRT.

I realize this is not the most user friendly scheme (but it's certainly
a step forward from "/dev/dspX") - please describe in detail what you
think a user friendly interface would look like.

Lee
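The `spdif:1` scheme Lee describes (a PCM name alone refers to card 0; appending `:N` selects card N) can be sketched as plain string composition. This is purely illustrative; the PCM names below are taken from the `aplay -L` listing earlier in the thread, and whether any of them actually opens depends on the card:

```python
# Sketch of ALSA device-string composition: "spdif" alone means card 0,
# "spdif:1" means the spdif PCM on card 1. Illustration only.

def device_candidates(card: int) -> list[str]:
    """Build device strings for card `card` from common PCM names."""
    pcm_names = ["default", "front", "spdif", "plughw"]  # from aplay -L
    return [f"{name}:{card}" for name in pcm_names]

print(device_candidates(1))
# ['default:1', 'front:1', 'spdif:1', 'plughw:1']
```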
Christoph Eckert
2006-03-01 21:48:06 UTC
Permalink
Post by Lee Revell
please describe in detail what you
think a user friendly interface would look like.
There are several issues at different points:

* alsaconf has done a really great job in the past, but meanwhile I see
the need for a replacement that can handle more than one card, support
USB (and, in the future, FireWire) devices, read an existing
configuration, set a certain device as the default device and so on.
Based on such a new script, we could then build GUI configuration
frontends (Qt, Gtk, KDE, you name it)

* ALSA aware applications need to offer the possibility to choose the
card to use. Many users have more than one device these days, a cheap
AC '97 one and a cool 7.1 USB device. When playing a video file, there
should be a possibility to tell the player which device to use. On tour
with my notebook I prefer to output the audio of a DVD mixed down to
stereo on my internal speakers, while at home or at a friend's I'd
rather use my USB device

* Applications should remember the last used device (not to mention
soundservers here) so the user gets a straightforward experience ("I
once set it up and since then it simply works(TM)")

* Users need a simple GUI to set the default (=most often used) device,
maybe renaming the cards, configuring special options easily (asoundrc
resp. special options in modules.conf like card ordering)

* ALSA runs a driver based on the chipset found. Unfortunately, and I
mainly have those AC '97 chips in mind, these have a huge set of
features a user never needs or, even worse, which are not available on
the chassis. But ALSA cannot know that because it only knows the chip.
Furthermore ALSA builds a generic mixer interface for the chip. That's
a really cool piece of software, but have you ever tried on a notebook
with an AC '97 chipset to set up your card for VOIP properly? An
average user will be lost. Therefore I had the idea to put an
additional layer between ALSA and the mixer interface so users can
contribute mixer descriptions for widespread cards. If such a
description doesn't exist, ALSA could still fall back to the generic
mixer interface


Just my two cents.


Best regards


ce
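The mixer-description layer Christoph proposes could, in spirit, be a per-model mapping from the driver's generic control names to the controls that are actually wired up, with a fallback to the generic mixer. Everything below is hypothetical: the control names and the example description are invented for illustration, not taken from any real driver:

```python
# Hypothetical sketch of a mixer-description layer between ALSA's
# generic mixer controls and what the user sees.

GENERIC_CONTROLS = ["Master", "PCM", "Mic", "Line", "Aux", "3D Control"]

# A contributed description for one (imaginary) laptop model: expose
# only the controls that exist on the chassis, with friendly labels.
LAPTOP_DESC = {
    "Master": "Speaker Volume",
    "Mic": "Built-in Microphone",
}

def visible_mixer(description=None):
    """Return (raw_name, label) pairs; fall back to the generic mixer
    when no contributed description exists for this card."""
    if description is None:
        return [(c, c) for c in GENERIC_CONTROLS]
    return [(raw, label) for raw, label in description.items()]

print(visible_mixer(LAPTOP_DESC))
```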
Lee Revell
2006-03-01 22:04:38 UTC
Permalink
Post by Christoph Eckert
Post by Lee Revell
please describe in detail what you
think a user friendly interface would look like.
* alsaconf has done a really great job in the past, but meanwhile I see
the need for a replacement which can handle more than one card, support
USB (and in the future firewire) devices, can read an existing
configuration, set a certain device as the default device and so on.
Based on such a new script, we then could build even GUI configuration
frontends (Qt, Gtk, KDE, you name it)
Much of this is already done - Gnome provides both a GUI and CLI
interface to set the default soundcard, System->Preferences->Sound.
It's unfortunate that many apps don't have a way to configure the sound
device they use.

Lee
Christoph Eckert
2006-03-01 22:13:49 UTC
Permalink
Post by Lee Revell
Much of this is already done - Gnome provides both a GUI and CLI
interface to set the default soundcard, System->Preferences->Sound.
is there a backend script available, distro-independent? I'd like to
look at it simply out of personal interest. Does it have a name?
Post by Lee Revell
It's unfortunate that many apps don't have a way to configure the
sound device they use.
Indeed it is. OTOH xmms is a very positive example.


Best regards


ce
Andy Wingo
2006-03-02 09:58:15 UTC
Permalink
Hi,
Post by Lee Revell
Post by Christoph Eckert
* alsaconf has done a really great job in the past, but meanwhile I see
the need for a replacement which can handle more than one card, support
USB (and in the future firewire) devices, can read an existing
configuration, set a certain device as the default device and so on.
Based on such a new script, we then could build even GUI configuration
frontends (Qt, Gtk, KDE, you name it)
Much of this is already done - Gnome provides both a GUI and CLI
interface to set the default soundcard, System->Preferences->Sound.
(That still mentions ESD, but we all have our closet-skeletons eh.)

I wanted to write in to mention this bug report,
http://bugzilla.gnome.org/show_bug.cgi?id=329106:

It's currently not possible to specify one specific ALSA device
in a pipeline such that the same pipeline is guaranteed to work
across machine reboots. The problem is that ALSA card numbers
are generated on each system startup. The result depends on the
random order of which kernel module gets loaded first. When
hotplugging USB audio devices, the card number changes may even
happen without machine reboots.

I propose to use HAL's UDI (Unique Device Id) as persistent ALSA
sound device identifier. A patch to make HAL's UDI independent of
ALSA's card number has already been committed to HAL CVS HEAD.

So, using HAL is a way to reliably choose a particular hardware device;
arguably ALSA should handle this case itself, though.

Regards,
--
Andy Wingo
http://wingolog.org/
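The instability Andy's bug report describes can be worked around by resolving a card's persistent name to its current index at runtime instead of hard-coding the index. A sketch of that idea, parsing `/proc/asound/cards`-style text (the sample below imitates that file's format and is made up, not from a real system):

```python
import re

# Resolve an ALSA card's persistent name (the bracketed id in
# /proc/asound/cards) to its current index. Sketch only.

SAMPLE = """\
 0 [Intel          ]: HDA-Intel - HDA Intel
                      HDA Intel at 0xee400000 irq 17
 1 [Audiophile     ]: USB-Audio - USB Audio Device
                      USB Audio Device at usb-0000:00:1d.0-1
"""

def card_index(text: str, name: str):
    """Return the index of the card whose bracketed id matches `name`,
    or None if no such card is present."""
    for m in re.finditer(r"^\s*(\d+)\s+\[(\S+)\s*\]", text, re.M):
        if m.group(2) == name:
            return int(m.group(1))
    return None

print(card_index(SAMPLE, "Audiophile"))  # 1
```

On a real system one would read `/proc/asound/cards` instead of `SAMPLE`; card names themselves can be pinned with the drivers' `id=`/`index=` module options.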
Christoph Eckert
2006-03-03 19:09:29 UTC
Permalink
        It's currently not possible to specify one specific ALSA
device in a pipeline such that the same pipeline is guaranteed to
work across machine reboots. The problem is that ALSA card numbers
are generated on each system startup. The result depends on the
random order of which kernel module gets loaded first. When
hotplugging USB audio devices, the card number changes may even
happen without machine reboots.
what about indexing the cards in modules.conf? Works great for me even
when hotplugging USB devices (and I use 5 of them ;-)


Best regards


ce
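For reference, the indexing Christoph mentions is done with `index=` module options; a hedged example (the module names vary per system and are illustrative only):

```
# Illustrative modules.conf fragment: pin the onboard chip to card 0
# and the USB device to card 1, so device strings like hw:1 stay
# stable across reboots and hotplug events.
options snd-intel8x0 index=0
options snd-usb-audio index=1
```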
Lee Revell
2006-03-03 19:19:13 UTC
Permalink
Post by Christoph Eckert
Post by Andy Wingo
It's currently not possible to specify one specific ALSA
device in a pipeline such that the same pipeline is guaranteed to
work across machine reboots. The problem is that ALSA card numbers
are generated on each system startup. The result depends on the
random order of which kernel module gets loaded first. When
hotplugging USB audio devices, the card number changes may even
happen without machine reboots.
what about indexing the cards in modules.conf? Works great for me even
when hotplugging USB devices (and I use 5 of them ;-)
And normal users should never have to touch that file. In Gnome, just
do System->Preferences->Sound and select the "Default sound card".

Lee
Christoph Eckert
2006-03-03 20:05:56 UTC
Permalink
Post by Lee Revell
Post by Christoph Eckert
what about indexing the cards in modules.conf? Works great for me
even when hotplugging USB devices (and I use 5 of them ;-)
And normal users should never have to touch that file.
true.
Post by Lee Revell
In Gnome,
just do System->Preferences->Sound and select the "Default sound
card".
What does it do, technically speaking? Create a user asoundrc with a
default device?


Best regards


ce

David Kastrup
2006-03-01 23:09:37 UTC
Permalink
Post by Lee Revell
Post by David Kastrup
What happens now if I do
aplay -D spdif something.wav
? Most certainly not the soundcard with the S/PDIF output gets used.
Instead some nonsense happens.
It will try to play to the spdif interface on card 0 (the onboard one)
which will fail.
aplay -D spdif:1 something.wav should DTRT.
aplay -D spdif:1 -f cdr /tmp/mnt/wo1.dat
ALSA lib confmisc.c:990:(snd_func_refer) Unable to find definition 'cards.USB-Audio.pcm.iec958.0:CARD=1,AES0=4,AES1=130,AES2=0,AES3=2'
ALSA lib conf.c:3479:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:3948:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2090:(snd_pcm_open_noupdate) Unknown PCM spdif:1
aplay: main:533: audio open error: No such file or directory
--
David Kastrup, Kriemhildstr. 15, 44793 Bochum
Paul Davis
2006-03-01 22:18:20 UTC
Permalink
Post by David Kastrup
Post by Lee Revell
Post by David Kastrup
What happens now if I do
aplay -D spdif something.wav
? Most certainly not the soundcard with the S/PDIF output gets used.
Instead some nonsense happens.
It will try to play to the spdif interface on card 0 (the onboard one)
which will fail.
aplay -D spdif:1 something.wav should DTRT.
aplay -D spdif:1 -f cdr /tmp/mnt/wo1.dat
ALSA lib confmisc.c:990:(snd_func_refer) Unable to find definition 'cards.USB-Audio.pcm.iec958.0:CARD=1,AES0=4,AES1=130,AES2=0,AES3=2'
ALSA lib conf.c:3479:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:3948:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2090:(snd_pcm_open_noupdate) Unknown PCM spdif:1
aplay: main:533: audio open error: No such file or directory
now that's what i call a sick joke.

if only i had more time, i'd be writing CoreAudio for linux right this
very second.

--p
Lee Revell
2006-03-01 22:28:28 UTC
Permalink
Post by Paul Davis
now that's what i call a sick joke.
if only i had more time, i'd be writing CoreAudio for linux right this
very second.
Which would magically make the 5 zillion different sound devices on the
market that you don't have the specs for, or even a hardware sample of,
Just Work?

Lee
Paul Davis
2006-03-02 02:40:11 UTC
Permalink
Post by Lee Revell
Post by Paul Davis
now that's what i call a sick joke.
if only i had more time, i'd be writing CoreAudio for linux right this
very second.
Which would magically make 5 zillion different sound devices on the
market that you don't have the specs to or even a hardware sample Just
Work?
no, it would provide names like

MOTU 828 mkII channel 1+2
RME HDSP (#1)
Builtin Audio

to the user.

it would also fix a myriad of other problems in ALSA, such as its
reliance on interrupts that occur at regular sample-based intervals,
its presentation of a multiplicity of programming models, and its
lack of a reasonable way to present itself to ordinary users.

--p
James Courtier-Dutton
2006-03-02 12:57:11 UTC
Permalink
Post by Paul Davis
Post by Lee Revell
Post by Paul Davis
now that's what i call a sick joke.
if only i had more time, i'd be writing CoreAudio for linux right this
very second.
Which would magically make 5 zillion different sound devices on the
market that you don't have the specs to or even a hardware sample Just
Work?
no, it would provide names like
MOTU 828 mkII channel 1+2
RME HDSP (#1)
Builtin Audio
to the user.
it would also fix a myriad of other problems in ALSA, such as its
reliance on interrupts that occur at regular sample-based intervals,
Can you suggest alternatives?
Post by Paul Davis
its presentation of a multiplicity of programming models, and its
There is no one-size-fits-all with sound programming models.
Post by Paul Davis
lack of reasonable way to present itself to ordinary users.
--p
What is wrong with the current presentation?
You currently get the name of the card.



This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you.
Paul Davis
2006-03-02 14:17:34 UTC
Permalink
Post by James Courtier-Dutton
Post by Paul Davis
no, it would provide names like
MOTU 828 mkII channel 1+2
RME HDSP (#1)
Builtin Audio
to the user.
it would also fix a myriad of other problems in ALSA, such as its
reliance on interrupts that occur at regular sample-based intervals,
Can you suggest alternatives?
i don't need to - they are fully documented by Apple in its description
of the HAL for audio devices. Rather than rely on the interrupts as
absolute indicators of time, you use them to feed a DLL. Then you use
the DLL in conjunction with a monotonic clock source (e.g. a cycle timer
or a reliable equivalent on certain AMD systems), and you can estimate
position in the h/w buffers to way better than single sample accuracy at
all times. more importantly, you can do this no matter what the basis
for the interrupt frequency is, so you get a single HAL model that works
equally well for PCI, USB and ieee1394 devices.
Post by James Courtier-Dutton
Post by Paul Davis
its presentation of a multiplicity of programming models, and its
There is no one-size-fits-all with sound programming models.
Apple don't agree with you, and neither do Steinberg or Microsoft (the
modern, reformed post-MME Microsoft, anyway). They each offer a single
programming model at the HAL level, and remarkably, it's the same
programming model in every case. I don't see forums for those platforms
complaining that it's a problem. Only unix programmers who go about
insisting that "everything should be a file, all i/o should be
open/read/write/close/ioctl" seem to have a problem with it, yet
curiously have no problem with the fact that you don't do video in that
way at all.
Post by James Courtier-Dutton
Post by Paul Davis
lack of reasonable way to present itself to ordinary users.
--p
What is wrong with the current presentation?
You currently get the name of the card.
i meant more generally. ALSA is so full of configuration options that
will be used by almost no-one that it's incredibly confusing for almost
everyone. i wrote years ago on alsa-devel about the paper from the guy
at SGI who was involved in their video API design and ended up being a
little disappointed. his reason? they went to so much effort to handle
all the corner cases that the core use case ("dump pixels into this part
of the framebuffer") was remarkably complex to do. ALSA strikes me as
much the same way, at every level, from the kernel API, to libasound, to
user space utilities.

one could argue, as Lee has done, that people (programmers, users)
should be using higher level APIs and leave the complexity behind, but
somebody or something has to deal with it at some point in order to get
sound in or out of the machine.

--p
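The delay-locked-loop idea Paul describes can be sketched as a simple two-coefficient tracking loop: each interrupt timestamp corrects both the phase estimate and the period estimate, so the loop converges on the true interrupt period even when the initial guess is off. This is a toy simulation, not the CoreAudio HAL; the coefficients are arbitrary:

```python
def dll_track(timestamps, b=0.3, c=0.1, period_guess=0.012):
    """Toy delay-locked loop: refine a period estimate from a stream
    of interrupt timestamps. b corrects phase, c corrects frequency."""
    t_est = timestamps[0]
    period = period_guess
    for t in timestamps[1:]:
        t_est += period       # predict when the next interrupt lands
        err = t - t_est       # observed phase error
        t_est += b * err      # nudge the phase estimate
        period += c * err     # nudge the period (frequency) estimate
    return period

# Interrupts actually arriving every 10 ms; initial guess was 12 ms.
ticks = [n * 0.010 for n in range(300)]
print(round(dll_track(ticks), 6))  # converges to 0.01
```

In the real scheme the tracked estimate is then combined with a monotonic clock (e.g. a cycle timer) to interpolate the hardware buffer position between interrupts to sub-sample accuracy.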
Jussi Laako
2006-03-02 18:27:35 UTC
Permalink
Post by Paul Davis
the framebuffer") was remarkably complex to do. ALSA strikes me as much
the same way, at every level, from the kernel API, to libasound, to user
space utilities.
one could argue, as Lee has done, that people (programmers, users)
should be using higher level APIs and leave the complexity behind, but
somebody or something has to deal with it at some point in order to get
sound in or out of the machine.
I somehow find this a bit funny. OSS has been dealing with these things
at driver level and hiding the complexity pretty well. And it also works
for pro cards like my Delta1010. First it was argued that there wasn't
enough control and ALSA was better. Now ALSA has taken this to the other
extreme, and we are arguing whether it's too complex.

Truth is probably somewhere between, as usual...
--
Jussi Laako <***@pp.inet.fi>
David Kastrup
2006-03-02 19:02:23 UTC
Permalink
Post by Jussi Laako
I somehow find this a bit funny. OSS has been dealing with these
things at driver level and hiding the complexity pretty well. And it
also works for pro cards like my Delta1010. First it was argued that
there wasn't enough control and ALSA was better. Now ALSA has taken
this to the other extreme and now we are arguing if it's too
complex.
No, that's not what we are arguing, at least as I understood it. The
topic was not complexity but accessibility. If it is impossible to
find explanations, stuff is hard to do. A well-documented crummy
interface can be easier to work with than an under-documented
well-designed one.
--
David Kastrup, Kriemhildstr. 15, 44793 Bochum
Richard Spindler
2006-03-02 23:26:49 UTC
Permalink
Post by David Kastrup
No, that's not what we are arguing, at least how I understood it. The
topic was not complexity but accessibility. If it is impossible to
find explanations, stuff is hard to do. A well-documented crummy
interface can be easier to work with than an under-documented
well-designed one.
I am not that sure about this one. To understand a well designed
simple API a glimpse at the headers ought to be enough documentation.

-Richard
David Kastrup
2006-03-02 23:30:52 UTC
Permalink
Post by Richard Spindler
Post by David Kastrup
No, that's not what we are arguing, at least how I understood it. The
topic was not complexity but accessibility. If it is impossible to
find explanations, stuff is hard to do. A well-documented crummy
interface can be easier to work with than an under-documented
well-designed one.
I am not that sure about this one. To understand a well designed
simple API a glimpse at the headers ought to be enough
documentation.
Few people would glimpse at header files in order to figure out the
command line arguments to "aplay".
--
David Kastrup, Kriemhildstr. 15, 44793 Bochum
Richard Spindler
2006-03-03 09:05:10 UTC
Permalink
Post by David Kastrup
Post by Richard Spindler
I am not that sure about this one. To understand a well designed
simple API a glimpse at the headers ought to be enough
documentation.
Few people would glimpse at header files in order to figure out the
command line arguments to "aplay".
Well, I referred to programming interfaces, and not to "user" interfaces.

-Richard
Lee Revell
2006-03-03 01:38:45 UTC
Permalink
Post by Paul Davis
its presentation of a multiplicity of programming models
How would you solve this? Make people who insist on a read()/write()
interface go through the OSS emulation layer? Would you remove
everything but the mmap() interface? The callback interface?

Lee
David Olofson
2006-03-03 07:16:46 UTC
Permalink
Post by Lee Revell
Post by Paul Davis
its presentation of a multiplicity of programming models
How would you solve this? Make people who insist on a
read()/write()
interface go through the OSS emulation layer? Would you remove
everything but the mmap() interface? The callback interface?
Well, there is a reason why all "serious" audio APIs use the callback
model. It's the only model that really works for low latency audio.
If you want to do serious real time audio, you'll have to get your
head around this model anyway. (You need to keep the CPU load steady,
and generating N samples every cycle rather than larger blocks "every
now and then" is the first step.) If you're not doing real time
stuff, you can wrap any API with whatever type of API you like, as
the buffering that is sometimes required, won't matter.

Indeed, there are some "latency reduction" tricks you can play with an
mmap() interface, but those are inefficient, messy hacks that you are
(or rather, were) forced to use to get usable latency in games on
some platforms. Not really usable for musical applications, and the
method doesn't fit well into engines with master effects and the
like. Callbacks on a proper OS are simpler, more reliable and more
efficient.

A read()/write() interface is basically just a (more or less) buffered
wrapper over a real interface. I frankly don't understand why one
would want to use something like it for anything remotely resembling
real time audio I/O. All it does is confuse matters WRT how much
buffering you actually have, especially when you're doing full duplex
audio. It may be handy in non real time applications (sometimes it's
easier to let the algorithm control the block sizes), but as those
aren't sensitive to additional buffering, you can just wrap the real
API with whatever you like. No reason to break or complicate the real
API to support that.


//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
'-- http://www.reologica.se - Rheology instrumentation --'
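The callback (pull) model David describes can be sketched in a few lines: the driver side asks the application for exactly N frames per cycle, which is what keeps the CPU load steady. A toy pull loop, not modeled on any particular API:

```python
def run_engine(callback, frames_per_cycle, cycles):
    """Toy pull-model engine: the 'driver' asks the app callback for
    exactly frames_per_cycle samples each cycle."""
    out = []
    for _ in range(cycles):
        block = callback(frames_per_cycle)
        assert len(block) == frames_per_cycle  # app must fill the block
        out.extend(block)
    return out

def ramp_callback(n, _state={"v": 0}):
    """Example app callback: emit a rising ramp, n samples at a time."""
    start = _state["v"]
    _state["v"] += n
    return list(range(start, start + n))

audio = run_engine(ramp_callback, frames_per_cycle=64, cycles=4)
print(len(audio))  # 256
```

A read()/write() wrapper can always be layered on top of such a loop for non-real-time use; the reverse (recovering low latency from a buffered write interface) is what doesn't work well.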
Dave Robillard
2006-03-02 04:38:09 UTC
Permalink
Post by Paul Davis
Post by David Kastrup
Post by Lee Revell
Post by David Kastrup
What happens now if I do
aplay -D spdif something.wav
? Most certainly not the soundcard with the S/PDIF output gets used.
Instead some nonsense happens.
It will try to play to the spdif interface on card 0 (the onboard one)
which will fail.
aplay -D spdif:1 something.wav should DTRT.
aplay -D spdif:1 -f cdr /tmp/mnt/wo1.dat
ALSA lib confmisc.c:990:(snd_func_refer) Unable to find definition 'cards.USB-Audio.pcm.iec958.0:CARD=1,AES0=4,AES1=130,AES2=0,AES3=2'
ALSA lib conf.c:3479:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:3948:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2090:(snd_pcm_open_noupdate) Unknown PCM spdif:1
aplay: main:533: audio open error: No such file or directory
now that's what i call a sick joke.
if only i had more time, i'd be writing CoreAudio for linux right this
very second.
1) Eliminate alsa-lib
2) Put jackd in kernel
3) Profit!

-DR-
James Courtier-Dutton
2006-03-02 12:49:52 UTC
Permalink
Post by Paul Davis
Post by David Kastrup
aplay -D spdif:1 -f cdr /tmp/mnt/wo1.dat
ALSA lib confmisc.c:990:(snd_func_refer) Unable to find definition 'cards.USB-Audio.pcm.iec958.0:CARD=1,AES0=4,AES1=130,AES2=0,AES3=2'
ALSA lib conf.c:3479:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:3948:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2090:(snd_pcm_open_noupdate) Unknown PCM spdif:1
aplay: main:533: audio open error: No such file or directory
now that's what i call a sick joke.
if only i had more time, i'd be writing CoreAudio for linux right this
very second.
--p
I don't think spdif works properly on any USB devices currently.



Lee Revell
2006-03-01 22:21:15 UTC
Permalink
Post by David Kastrup
Post by Lee Revell
Post by David Kastrup
What happens now if I do
aplay -D spdif something.wav
? Most certainly not the soundcard with the S/PDIF output gets used.
Instead some nonsense happens.
It will try to play to the spdif interface on card 0 (the onboard one)
which will fail.
aplay -D spdif:1 something.wav should DTRT.
aplay -D spdif:1 -f cdr /tmp/mnt/wo1.dat
ALSA lib confmisc.c:990:(snd_func_refer) Unable to find definition 'cards.USB-Audio.pcm.iec958.0:CARD=1,AES0=4,AES1=130,AES2=0,AES3=2'
ALSA lib conf.c:3479:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:3948:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2090:(snd_pcm_open_noupdate) Unknown PCM spdif:1
aplay: main:533: audio open error: No such file or directory
I think this is just a bug. Can you repost this report to alsa-user at
lists.sourceforge.net or report it in the ALSA bug tracker?

https://bugtrack.alsa-project.org/alsa-bug/login_select_proj_page.php?ref=bug_report_advanced_page.php

Lee
David Kastrup
2006-03-01 23:25:28 UTC
Permalink
Post by Lee Revell
Post by David Kastrup
Post by Lee Revell
Post by David Kastrup
What happens now if I do
aplay -D spdif something.wav
? Most certainly not the soundcard with the S/PDIF output gets used.
Instead some nonsense happens.
It will try to play to the spdif interface on card 0 (the onboard one)
which will fail.
aplay -D spdif:1 something.wav should DTRT.
aplay -D spdif:1 -f cdr /tmp/mnt/wo1.dat
ALSA lib confmisc.c:990:(snd_func_refer) Unable to find definition 'cards.USB-Audio.pcm.iec958.0:CARD=1,AES0=4,AES1=130,AES2=0,AES3=2'
ALSA lib conf.c:3479:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:3948:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2090:(snd_pcm_open_noupdate) Unknown PCM spdif:1
aplay: main:533: audio open error: No such file or directory
I think this is just a bug. Can you repost this report to alsa-user
at lists.sourceforge.net or report it in the ALSA bug tracker?
I can't report something as a bug as long as I have no clue whatsoever
what the appropriate syntax and expected behavior could be.

There certainly is nothing in the documentation saying that
"-D spdif:1" could be expected to do anything sensible.
--
David Kastrup, Kriemhildstr. 15, 44793 Bochum
Lee Revell
2006-03-01 22:35:55 UTC
Permalink
Post by David Kastrup
I can't report something as a bug as long as I have no clue whatsoever
what the appropriate syntax and expected behavior could be.
There certainly is nothing in the documentation saying that
"-D spdif:1" could be expected to do anything sensible.
What about -D hw:1 or -D plughw:1? USB audio devices are weird; it's
much harder to determine sane defaults than for PCI devices.

Lee
David Kastrup
2006-03-02 00:07:55 UTC
Permalink
Post by Lee Revell
Post by David Kastrup
I can't report something as a bug as long as I have no clue whatsoever
what the appropriate syntax and expected behavior could be.
There certainly is nothing in the documentation saying that
"-D spdif:1" could be expected to do anything sensible.
What about -D hw:1 or -D plughw:1? USB audio devices are weird, it's
much harder to determine sane defaults than for PCI devices.
$ aplay -D hw:1 -f cdr /tmp/mnt/wo1.dat
Playing raw data '/tmp/mnt/wo1.dat' : Signed 16 bit Big Endian, Rate 44100 Hz, Stereo
aplay: set_params:882: Sample format non available
$ aplay -D plughw:1 -f cdr /tmp/mnt/wo1.dat
Playing raw data '/tmp/mnt/wo1.dat' : Signed 16 bit Big Endian, Rate 44100 Hz, Stereo

Works apparently.

Not that I have a clue where I could find out the difference between
plughw and just hw ...


So now I just need to find out how to set index marks when recording
to the minidisc player, so as not to get one big track, and not having
to manually press the "track mark" command on the minidisc player
after each track.
--
David Kastrup, Kriemhildstr. 15, 44793 Bochum
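The difference David stumbled on: hw:1 hands samples to the card untouched, so the format must match exactly (hence the earlier "Sample format non available"), while plughw:1 routes through ALSA's plug layer, which converts format, rate and channels as needed. What that conversion amounts to for the -f cdr case (S16_BE input, a little-endian card) can be sketched as a byte swap; this is only an illustration of the kind of work the plug layer does, not its actual code:

```python
import struct

def s16be_to_s16le(frames: bytes) -> bytes:
    """Sketch of one conversion ALSA's plug layer can perform:
    re-encode big-endian 16-bit samples as little-endian."""
    n = len(frames) // 2
    samples = struct.unpack(f">{n}h", frames)   # decode S16_BE
    return struct.pack(f"<{n}h", *samples)      # re-encode S16_LE

print(s16be_to_s16le(b"\x00\x01\xff\xfe").hex())  # 0100feff
```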
Lee Revell
2006-03-01 20:53:06 UTC
Permalink
Post by David Kastrup
What happens now if I do
aplay -D spdif something.wav
? Most certainly not the soundcard with the S/PDIF output gets used.
Instead some nonsense happens.
That can't ever work because we don't have enough information about all
the supported devices to definitively say device $FOO has SPDIF and
device $BAR doesn't. Lots of devices look like they have SPDIF to the
driver but it's not wired up to anything. Etc.

Solving this problem in the way you suggest would require the ALSA
developers having all the details about the hardware that the people who
write the Windows drivers do. This is not going to happen anytime soon.

Lee
David Kastrup
2006-03-01 22:07:17 UTC
Permalink
Post by Lee Revell
Post by David Kastrup
What happens now if I do
aplay -D spdif something.wav
? Most certainly not the soundcard with the S/PDIF output gets used.
Instead some nonsense happens.
That can't ever work because we don't have enough information about
all the supported devices to definitively say device $FOO has SPDIF
and device $BAR doesn't. Lots of devices look like they have SPDIF
to the driver but it's not wired up to anything. Etc.
Solving this problem in the way you suggest would require the ALSA
developers having all the details about the hardware that the people
who write the Windows drivers do. This is not going to happen
anytime soon.
I am not asking for a solution. I am asking for a clue. The man page
to aplay does not mention what a PCM actually is. It just tells you
to list them with -L. It does not mention that you can just tack a :1
after it to specify a different sound card. It does not bother to
mention that the PCM list from -L is basically static, does not depend
on the actually available sound card capabilities, applies in this form
only and exclusively to the default sound card, and can be used for
other sound cards by tacking on little cute suffixes like :1.

This sort of stuff is simply undocumented anywhere close to where it
would be used. I have not been able to find it, and I asked man
pages, general ALSA documentation, HOWTOs and Google.
--
David Kastrup, Kriemhildstr. 15, 44793 Bochum
Lee Revell
2006-03-01 21:11:45 UTC
Permalink
Post by David Kastrup
The man page
to aplay does not mention what a PCM actually is.
Well, the Windows docs don't say what a "Wave device" is, but people
seem to figure that out.

PCM is merely a more technical term for what is called "Wave" on
windows.

What does MacOS call it?

Lee
David Kastrup
2006-03-01 22:18:10 UTC
Permalink
Post by Lee Revell
Post by David Kastrup
The man page
to aplay does not mention what a PCM actually is.
Well the Windows docs don't say what a "Wave device" is but people
seem to figure that out.
PCM is merely a more technical term for what is called "Wave" on
windows.
Is that supposed to be some kind of a sick joke? We are talking about
the syntax for specifying PCM devices to ALSA. People don't guess the
syntax for Wave devices just all by themselves.
--
David Kastrup, Kriemhildstr. 15, 44793 Bochum
Lee Revell
2006-03-01 21:15:41 UTC
Permalink
Post by David Kastrup
Post by Lee Revell
Post by David Kastrup
What happens now if I do
aplay -D spdif something.wav
? Most certainly not the soundcard with the S/PDIF output gets used.
Instead some nonsense happens.
That can't ever work because we don't have enough information about
all the supported devices to definitively say device $FOO has SPDIF
and device $BAR doesn't. Lots of devices look like they have SPDIF
to the driver but it's not wired up to anything. Etc.
Solving this problem in the way you suggest would require the ALSA
developers having all the details about the hardware that the people
who write the Windows drivers do. This is not going to happen
anytime soon.
I am not asking for a solution. I am asking for a clue. The man page
to aplay does not mention what a PCM actually is. It just tells you
to list them with -L. It does not mention that you can just tack a :1
after it to specify a different sound card. It does not bother to
mention that the PCM list from -L is basically static and not
depending on the actual available sound card capabilities, applies in
this form only and exclusively to the default sound card, and can be
used for other sound cards by tacking on little cute suffixes like :1.
This sort of stuff is simply undocumented anywhere close to where it
would be used. I have not been able to find it, and I asked man
pages, general ALSA documentation, HOWTOs and Google.
If someone wants to set up an "ALSA mini HOWTO wiki" I'll start the
content, by adding the info from this thread. But it can't stay a Wiki
because those fill up with misinformation or extraneous information
eventually - when it's done we'll publish it as a static HOWTO.

Lee
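The syntax under discussion can be sketched as follows (a sketch only: PCM names such as `spdif` come from ALSA's configuration files rather than from probing the hardware, so whether a given name maps to a physically wired output depends on the card):

```shell
# List the PCM names ALSA's configuration defines. This list is
# largely static, not derived from the installed card's capabilities:
aplay -L

# Play through the default card's S/PDIF output, if it has one:
aplay -D spdif something.wav

# The same PCM on the second sound card (card index 1), selected
# by appending the card index after a colon:
aplay -D spdif:1 something.wav
```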
David Kastrup
2006-03-01 22:21:53 UTC
Permalink
Post by Lee Revell
Post by David Kastrup
Post by Lee Revell
Post by David Kastrup
What happens now if I do
aplay -D spdif something.wav
? Most certainly the sound card with the S/PDIF output does not get
used. Instead some nonsense happens.
That can't ever work because we don't have enough information about
all the supported devices to definitively say device $FOO has SPDIF
and device $BAR doesn't. Lots of devices look like they have SPDIF
to the driver but it's not wired up to anything. Etc.
Solving this problem in the way you suggest would require the ALSA
developers having all the details about the hardware that the people
who write the Windows drivers do. This is not going to happen
anytime soon.
I am not asking for a solution. I am asking for a clue. The man page
to aplay does not mention what a PCM actually is. It just tells you
to list them with -L. It does not mention that you can just tack a :1
after it to specify a different sound card. It does not bother to
mention that the PCM list from -L is basically static and not
depending on the actual available sound card capabilities, applies in
this form only and exclusively to the default sound card, and can be
used for other sound cards by tacking on little cute suffixes like :1.
This sort of stuff is simply undocumented anywhere close to where it
would be used. I have not been able to find it, and I asked man
pages, general ALSA documentation, HOWTOs and Google.
If someone wants to set up an "ALSA mini HOWTO wiki" I'll start the
content, by adding the info from this thread. But it can't stay a Wiki
because those fill up with misinformation or extraneous information
eventually - when it's done we'll publish it as a static HOWTO.
It belongs in the example sections of the man pages, at the very
least. Using some PCM of a secondary sound card is important enough
to be used as an example for _all_ ALSA-related commands. If it is
not there in every man page, at least a clear reference to the man
page where it is documented is required.

HOWTOs are supposed to be hands-on extracts of the full documentation.
They are not supposed to replace documentation, but give recipes and
outlines that make it easier to work with the normal documentation.
--
David Kastrup, Kriemhildstr. 15, 44793 Bochum
Lee Revell
2006-03-01 21:27:32 UTC
Permalink
Post by David Kastrup
HOWTOs are supposed to be hands-on extracts of the full documentation.
They are not supposed to replace documentation, but give recipes and
outlines that make it easier to work with the normal documentation.
Well, tough shit, because all I have time for is a mini HOWTO, unless
you're offering to pay me.

Lee
David Kastrup
2006-03-01 22:40:16 UTC
Permalink
Post by Lee Revell
Post by David Kastrup
HOWTOs are supposed to be hands-on extracts of the full documentation.
They are not supposed to replace documentation, but give recipes and
outlines that make it easier to work with the normal documentation.
Well, tough shit, because all I have time for is a mini HOWTO, unless
you're offering to pay me.
It takes less time to add 2 example lines into the man pages than to
answer complaints on the list.

And it does more for your user base than fixing a dozen bugs.

I happen to be project leader for AUCTeX and preview-latex, and can
tell you from that position that shaving time off basics from the
standard documentation of the standard commands is not going to
improve your total time balance. It will always be more efficient to
answer once for thousands of people than to answer a dozen times for a
dozen people even if hundreds just give up without a peep.
--
David Kastrup, Kriemhildstr. 15, 44793 Bochum
Lee Revell
2006-03-01 21:49:26 UTC
Permalink
Post by David Kastrup
Post by Lee Revell
Post by David Kastrup
HOWTOs are supposed to be hands-on extracts of the full documentation.
They are not supposed to replace documentation, but give recipes and
outlines that make it easier to work with the normal documentation.
Well, tough shit, because all I have time for is a mini HOWTO, unless
you're offering to pay me.
It takes less time to add 2 example lines into the man pages than to
answer complaints on the list.
And it does more for your user base than fixing a dozen bugs.
I happen to be project leader for AUCTeX and preview-latex, and can
tell you from that position that shaving time off basics from the
standard documentation of the standard commands is not going to
improve your total time balance. It will always be more efficient to
answer once for thousands of people than to answer a dozen times for a
dozen people even if hundreds just give up without a peep.
Well, I didn't develop ALSA originally, I've just contributed patches to
a few drivers. You'd have to ask the authors on alsa-devel about the
lack of better examples in the man pages.

Most users will never have to bother with this stuff - the app will just
give them a dialog that lets them select a sound device.

Also, a lot of CLI apps don't accept ALSA devices in the standard syntax
because they reserve : for something else (mplayer comes to mind).

Lee
Christoph Eckert
2006-03-01 21:56:58 UTC
Permalink
Post by Lee Revell
Also, a lot of CLI apps don't accept ALSA devices in the standard
syntax because they reserve : for something else (mplayer comes to
mind).
at least it offers the -ao option.


Best regards


ce
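To make the mplayer caveat concrete (this is reconstructed from memory of mplayer's `-ao alsa` suboption syntax, so verify the exact substitutions against its man page): since `:` and `,` act as suboption separators for mplayer, they have to be rewritten inside the ALSA device name, `:` as `=` and `,` as `.`:

```shell
# ALSA's own syntax for card 1, device 0:
aplay -D hw:1,0 something.wav

# The same device passed to mplayer, with ':' written as '='
# and ',' written as '.':
mplayer -ao alsa:device=hw=1.0 something.wav
```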
Christoph Eckert
2006-03-01 21:55:17 UTC
Permalink
Post by Lee Revell
If someone wants to set up an "ALSA mini HOWTO wiki" I'll start the
content, by adding the info from this thread.
a condensation would be really cool. What about
http://alsa.opensrc.org
?



Best regards


ce
Alfons Adriaensen
2006-03-02 12:43:25 UTC
Permalink
Post by David Kastrup
I am not asking for a solution. I am asking for a clue. The man page
to aplay does not mention what a PCM actually is. It just tells you
to list them with -L.
This is my main gripe with ALSA documentation: it often uses terms
and concepts that are defined nowhere. The English terms used for
some concepts can be hard to decode as well. This is a real problem
with e.g. the doxygen pages for the pcm and seq APIs.
--
FA
James Courtier-Dutton
2006-03-02 12:59:12 UTC
Permalink
Post by Alfons Adriaensen
Post by David Kastrup
I am not asking for a solution. I am asking for a clue. The man page
to aplay does not mention what a PCM actually is. It just tells you
to list them with -L.
This is my main gripe with ALSA documentation: it often uses terms
and concepts that are defined nowhere. The english terms used for
some concepts can be hard to decode as well. This is a real problem
with e.g. the doxygen pages for the pcm and seq APIs.
If you don't like the current documentation, you are welcome to improve it.
Just update the wiki.



Alfons Adriaensen
2006-03-02 13:43:29 UTC
Permalink
Post by James Courtier-Dutton
If you don't like the current documentation, you are welcome to improve it.
Just update the wiki.
I'd be happy to, if only I could just be a bit more confident
about my own knowledge. Currently I'm really in no position to
contribute anything that I'd consider to be reliable information.
When you have to find out things by guessing and trial and error,
and finally arrive at something that works, your understanding
of the underlying concepts could be completely wrong. I hate
to say this, but that's how the MIDI interface in Aeolus was
written. The ALSA audio part (libclalsadrv) was based 100% on
a deconstruction of JACK's ALSA backend.

Just one simple example (from the top of my head, so it may
contain inaccuracies).

In the ALSA seq world, ports can be readable, writeable, and
those two attributes also have variants referred to by the
term 'subscription'. I found out you need these, as otherwise
your ports don't seem to exist at all. So what does this term
really mean? As far as my understanding goes, its real meaning
is something more like 'public' or 'visible'. But I'm just
guessing...

Also, an app can create any number of ALSA seq 'devices' and
each 'device' can have multiple 'ports'. What is the rationale
behind this two-level scheme? When would I prefer to use
N single-port devices rather than one device with N ports?
This sort of thing is never explained, AFAICS.
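To make the capability question concrete, here is a minimal sketch of
port creation (the client and port names are illustrative, and the same
caveat applies: reconstructed from memory of the asoundlib calls, not
from any documentation):

```c
#include <alsa/asoundlib.h>

int main(void)
{
    snd_seq_t *seq;
    int port;

    /* Opening a handle registers one sequencer client. */
    if (snd_seq_open(&seq, "default", SND_SEQ_OPEN_DUPLEX, 0) < 0)
        return 1;
    snd_seq_set_client_name(seq, "example-client");

    /* A port that is writeable (accepts events) and, via the SUBS
       bit, can be connected to by other clients. Without
       SND_SEQ_PORT_CAP_SUBS_WRITE, tools like aconnect cannot
       subscribe to the port -- which is why it seems not to exist. */
    port = snd_seq_create_simple_port(seq, "input",
        SND_SEQ_PORT_CAP_WRITE | SND_SEQ_PORT_CAP_SUBS_WRITE,
        SND_SEQ_PORT_TYPE_MIDI_GENERIC | SND_SEQ_PORT_TYPE_APPLICATION);
    if (port < 0)
        return 1;

    snd_seq_close(seq);
    return 0;
}
```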

I'm pretty sure all these things (and many other that remain
a mystery) are well designed and the result of a considerable
amount of thought. But I don't find it
in the documentation. This is often the case with doxygen
documented systems - the top level design documents just
don't exist.
--
FA