

ecasound documentation - design


Kai Vehmanen

26.07.1999

Table of Contents

1: Preface

2: Development

2.1: Open design

2.2: System vs interface

2.3: Use cases

2.4: Sanity checks

3: Common use-cases

3.1: Simple non-interactive processing

3.2: Multitrack mixing

3.3: Realtime effect processing

3.4: One-track recording

3.5: Multitrack recording

3.6: Recycling a signal through external devices

4: Signal flow

5: Control flow

5.1: Passive operation

5.2: Interactive operation

6: Class descriptions

6.1: Core

6.1.1: ECA_PROCESSOR
6.1.2: ECA_SESSION
6.1.3: ECA_CONTROLLER

6.2: General

6.2.1: ECA_CHAINSETUP
6.2.2: ECA_RESOURCES

6.3: Data objects

6.3.1: CHAIN
6.3.2: SAMPLEBUFFER

6.4: Audio input/output

6.4.1: AUDIO_IO_DEVICE

6.5: Misc

6.5.1: DEBUG



1: Preface

Notice! This is not the actual class/source code documentation. This page is a collection of design notes and thoughts. I've noticed that by trying to write a good class description (roles, interface, use), you quickly find out about possible design flaws.

2: Development

2.1: Open design

Although specific use-cases are used for testing design concepts, they are not to be considered development goals.

2.2: System vs interface

System design is kept separate from user interface design and feature implementation.

2.3: Use cases

Use-cases are used extensively when designing the user interface.

2.4: Sanity checks

Sanity checks are done only to prevent crashes. All effects and operators happily accept "insane" parameters. :)

3: Common use-cases

3.1: Simple non-interactive processing

One input is processed and then written to one output. This includes effect processing, normal sample playback, format conversions, etc.
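
For example, a run of this kind could look something like the following on the command line (the file names and the -efl lowpass option are only an illustration):

    ecasound -i:input.wav -efl:800 -o:output.wav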

3.2: Multitrack mixing

Multiple inputs are mixed into one output.

3.3: Realtime effect processing

There's at least one realtime input and one realtime output. The signal is sampled from the realtime input, processed and written to the realtime output.

3.4: One-track recording

One input is processed and written to one or more outputs.

3.5: Multitrack recording

The most common situation is that there are two separate chains. The first one consists of a realtime input routed to a non-realtime output. This is the recording chain. The other one is the monitor chain, and it consists of one or more non-realtime inputs routed to a realtime output. You could also route your realtime input to the monitoring chain, but this is not recommended because of severe timing problems. To synchronize these two separate chains, ecasound uses a special multitrack mode (which should be enabled automatically).
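
A sketch of this kind of setup on the command line; the chain names, file names and the /dev/dsp device are only examples:

    ecasound -a:rec -i:/dev/dsp -o:new_track.wav \
             -a:mon -i:old_take.wav -o:/dev/dsp

Here chain "rec" is the recording chain and chain "mon" is the monitor chain.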

3.6: Recycling a signal through external devices

Just like multitrack recording. The only difference is that the realtime input and output are externally connected.

4: Signal flow

All the necessary information about signal flow is found in CHAIN objects. Currently signals can't be redirected from one chain to another. You can still assign inputs and outputs to multiple chains.

5: Control flow

5.1: Passive operation

When ecasound is run in passive mode, the program flow is simple. An ECA_SESSION object is created with suitable parameters, it is passed to an ECA_PROCESSOR object and that is all. Once the engine is started, it does the processing and exits.

Another way to do passive processing is to create an ECA_CONTROLLER object and use it to access and modify the ECA_SESSION object before passing it to ECA_PROCESSOR.
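
A minimal sketch of passive operation in C++; the header names and the session constructor below are assumptions, only the ECA_PROCESSOR(ECA_SESSION*) initialization and exec() come from the description in section 6.1.1:

    #include "eca-session.h"   // assumed header for ECA_SESSION / COMMAND_LINE
    #include "eca-main.h"      // assumed header for ECA_PROCESSOR

    int main(int argc, char *argv[]) {
      // Build a session description from the command-line arguments.
      COMMAND_LINE cline (argc, argv);
      ECA_SESSION session (cline);

      // Hand the session to the engine and let it run on its own.
      ECA_PROCESSOR processor (&session);
      processor.exec();        // returns when processing is finished
      return 0;
    }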

5.2: Interactive operation

In interactive mode, everything is done using the interface provided by ECA_CONTROLLER. This is when things get complex:

An ECA_SESSION object can contain many ECA_CHAINSETUP objects, but only one of them can be active. On the other hand, it is possible that there are no chainsetups at all. If this is the case, about the only thing you can do is add a new chainsetup.

When a chainsetup is activated, it can be edited using the interface provided by ECA_CONTROLLER. Before actual processing can start, the chainsetup must first be connected. Only valid chainsetups (at least one input-output pair connected to the same chain) can be connected.
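
A rough sketch of how a frontend could drive this sequence; the member function names below are hypothetical and only illustrate the add/activate/connect/start steps:

    // Hypothetical frontend code; the controller's real member
    // function names may differ.
    void build_and_start(ECA_CONTROLLER* ctrl) {
      // Add a new chainsetup and give it one chain with an
      // input-output pair (all names here are made up).
      ctrl->add_chainsetup("playback");     // inactive -> activated
      ctrl->add_chain("default");
      ctrl->add_input("input.wav");
      ctrl->add_output("/dev/dsp");

      // A valid chainsetup can be connected; files and devices
      // are opened at this point.
      ctrl->connect_chainsetup();

      ctrl->start();                        // engine status: running
    }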

ECA_CHAINSETUP can be...

inactive
- can't be accessed from ECA_PROCESSOR

activated, invalid
- can be edited (files and devices are not opened)

activated, valid
- can be connected (files and devices are not opened)

connected
- ready for processing (files and devices are opened before connecting)

ECA_PROCESSOR status is one of...

not_ready
- ECA_SESSION object is not ready for processing or ECA_PROCESSOR hasn't been created

running
- processing

stopped
- processing hasn't been started or it has been stopped before completion

finished
- processing has been completed
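
These two state sets map naturally onto simple enumerations; a sketch with illustrative names, not the actual declarations:

    // Chainsetup life cycle (see the list above).
    enum Chainsetup_state { cs_inactive, cs_activated_invalid,
                            cs_activated_valid, cs_connected };

    // Engine status reported by ECA_PROCESSOR.
    enum Engine_status { ep_not_ready, ep_running,
                         ep_stopped, ep_finished };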

6: Class descriptions

6.1: Core

6.1.1: ECA_PROCESSOR

ECA_PROCESSOR is the actual processing engine. It is initialized with a pointer to an ECA_SESSION object, which holds all the information needed at runtime. Processing is started with the exec() member function, and after that ECA_PROCESSOR runs on its own. If interactive mode is enabled in ECA_SESSION, ECA_PROCESSOR can be controlled using the ECA_CONTROLLER class, which offers a safe way to control ecasound. Another way to communicate with ECA_PROCESSOR is to access the ECA_SESSION object directly.
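
In outline, the class could look like this; only the ECA_SESSION pointer and exec() are taken from the text above, the rest is a sketch:

    class ECA_SESSION;                       // all runtime data lives here

    class ECA_PROCESSOR {
     public:
      ECA_PROCESSOR(ECA_SESSION* session);   // engine reads its setup from the session
      void exec(void);                       // run until finished (or stopped)
    };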

6.1.2: ECA_SESSION

ECA_SESSION represents the data used by ecasound. A session contains all ECA_CHAINSETUP objects and general runtime settings (interactive mode, debug level, etc.). Only one ECA_CHAINSETUP can be active at a time. To make it easier to control how threads access ECA_SESSION, only the ECA_PROCESSOR and ECA_CONTROLLER classes have direct access to ECA_SESSION data and functions. Other classes can only use const members of ECA_SESSION.
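
One way to express this access rule in C++ is with friend declarations; a sketch where the member names are placeholders:

    #include <vector>

    class ECA_CHAINSETUP;

    class ECA_SESSION {
      friend class ECA_PROCESSOR;
      friend class ECA_CONTROLLER;

     public:
      // Other classes see only const members.
      bool is_interactive(void) const;
      const ECA_CHAINSETUP* active_chainsetup(void) const;

     private:
      std::vector<ECA_CHAINSETUP*> chainsetups;   // all chainsetups
      ECA_CHAINSETUP* selected;                   // at most one active
    };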

6.1.3: ECA_CONTROLLER

ECA_CONTROLLER represents the public interface offered by the ecasound engine. It takes string commands and interprets them. Then it either performs the task itself or passes the command on to the engine (ECA_PROCESSOR). It also offers functions for modifying ECA_SESSION data. In some rare cases (for instance when quitting ecasound) it throws an exception.
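
A sketch of the string-command idea; the command names and the dispatch below are purely illustrative:

    #include <string>

    class ECA_CONTROLLER {
     public:
      // Purely illustrative command dispatcher.
      void command(const std::string& cmd) {
        if (cmd == "start") start_engine();                   // passed on to ECA_PROCESSOR
        else if (cmd == "cs-connect") connect_chainsetup();   // handled by the controller itself
        else if (cmd == "quit") throw quit_request();         // one of the rare exception cases
        // ... and so on for the other commands
      }

     private:
      void start_engine(void);
      void connect_chainsetup(void);
      struct quit_request {};       // hypothetical exception type
    };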

6.2: General

6.2.1: ECA_CHAINSETUP

ECA_CHAINSETUP represents a group of CHAINs and information about how they are connected. An ECA_CHAINSETUP can be constructed from a COMMAND_LINE object or it can be loaded from an ASCII file. It's also possible to save an ECA_CHAINSETUP to a simple ASCII file. The syntax is identical to the command-line syntax.
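
For example, a saved chainsetup file could look something like this (the options shown are only an illustration of the command-line style syntax):

    -a:1 -i:track1.wav
    -a:2 -i:track2.wav
    -a:1,2 -o:mix.wav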

6.2.2: ECA_RESOURCES

This class is an interface to the ~/.ecasoundrc configuration file.

6.3: Data objects

6.3.1: CHAIN

A CHAIN represents a single signal flow abstraction. Every CHAIN has one slot for an audio input device and one slot for output. Because these device objects are unique and can be assigned to multiple chains, id numbers are used to refer to the actual device objects. In addition, a CHAIN has a name, a SAMPLE_BUFFER object and a vector of CHAIN_OPERATORs that operate on the sample data.
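
As a data-structure sketch (member names are placeholders for the ones in the source):

    #include <string>
    #include <vector>
    #include "samplebuffer.h"       // SAMPLE_BUFFER (see 6.3.2)

    class CHAIN_OPERATOR;           // effect/operator base class

    class CHAIN {
     public:
      std::string name;

      // Inputs and outputs are shared between chains, so only
      // id numbers of the device objects are stored here.
      int input_id;
      int output_id;

      SAMPLE_BUFFER audioslot;                  // sample data in transit
      std::vector<CHAIN_OPERATOR*> chainops;    // applied in order
    };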

6.3.2: SAMPLEBUFFER

The basic unit for representing sample data. The data type used to represent a single sample, the value range, channel count, sampling rate and system endianness are all specified in "samplebuffer.h".
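
In other words, samplebuffer.h fixes the basic sample format with a handful of typedefs and constants; a sketch with placeholder names and values:

    #include <cstdint>

    // samplebuffer.h (sketch; the real names and values differ)
    typedef int16_t sample_type;                   // type of a single sample
    static const long int sample_max = 32767;      // value range
    static const int channel_count = 2;
    static const long int sampling_rate = 44100;
    static const bool is_system_littleendian = true;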

6.4: Audio input/output

6.4.1: AUDIO_IO_DEVICE

AUDIO_IO_DEVICE is a virtual base class for all audio I/O devices. Audio devices can be opened in the following modes: read, write or read_write. Input and output routines take pointers to SAMPLE_BUFFER objects as their arguments. All devices are either non-realtime (normal files) or realtime (soundcards). Realtime means that once the device is started, data input/output doesn't depend on calls to the AUDIO_IO_DEVICE object. With non-realtime devices, input and output are done only when requested. For performance reasons, much of the responsibility is given to the user of these classes. Nothing prevents you from writing to an object opened for reading (probably with disastrous results).
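
The interface could be sketched roughly like this; only the open modes, the SAMPLE_BUFFER arguments and the realtime distinction come from the text above, the member names are assumptions:

    #include <string>

    class SAMPLE_BUFFER;

    class AUDIO_IO_DEVICE {
     public:
      enum Io_mode { io_read, io_write, io_read_write };

      virtual void open(const std::string& name, Io_mode mode) = 0;
      virtual void close(void) = 0;

      // Input and output take pointers to sample buffers.
      virtual void read_buffer(SAMPLE_BUFFER* sbuf) = 0;
      virtual void write_buffer(SAMPLE_BUFFER* sbuf) = 0;

      // Realtime devices (soundcards) keep producing/consuming data on
      // their own once started; non-realtime devices (files) only do
      // i/o when these calls are made.
      virtual bool is_realtime(void) const = 0;

      virtual ~AUDIO_IO_DEVICE(void) {}
    };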

6.5: Misc

6.5.1: DEBUG

Virtual interface class for the debugging subsystem. The ecasound engine sends all debug messages to this class. The actual implementation can be done in many ways. For example, in the console-mode version of ecasound, the TEXTDEBUG class is used to implement the DEBUG interface. It sends all messages that have a suitable debug level to the standard output stream. In qtecasound, on the other hand, DEBUG is implemented using a Qt widget.
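
A sketch of the interface and a console implementation in the spirit of TEXTDEBUG; the member names and the level test are placeholders:

    #include <iostream>
    #include <string>

    class DEBUG {
     public:
      virtual void msg(int level, const std::string& info) = 0;
      virtual ~DEBUG(void) {}
    };

    // Console implementation along the lines of TEXTDEBUG.
    class TEXTDEBUG : public DEBUG {
     public:
      void msg(int level, const std::string& info) {
        if (level <= debug_level)          // only suitably important messages
          std::cout << info << std::endl;  // ... go to standard output
      }
     private:
      int debug_level = 3;                 // placeholder threshold
    };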