From 991c143d436f64ac7cc955241a7892d18e58568a Mon Sep 17 00:00:00 2001
From: Vaidegi B
Date: Mon, 26 Jul 2021 14:12:40 +0530
Subject: [PATCH 1/2] Update audio_standard README.md en

Signed-off-by: Vaidegi Balakrishnan
---
 README.md | 134 ++++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 95 insertions(+), 39 deletions(-)

diff --git a/README.md b/README.md
index 80fd4977b5..067cb37a08 100755
--- a/README.md
+++ b/README.md
@@ -1,22 +1,23 @@
 # Audio
-- [Introduction](#section119mcpsimp)
-    - [Basic Concepts](#section122mcpsimp)
-
-- [Directory Structure](#section179mcpsimp)
-- [Usage Guidelines](#section112738505318)
-- [Repositories Involved](#section340mcpsimp)
-
-## Introduction
-
-The **audio\_standard** repository supports the development of audio services. You can use this module to manage audio volume.
+ - [Introduction](#introduction)
+    - [Basic Concepts](#basic-concepts)
+ - [Directory Structure](#directory-structure)
+ - [Usage Guidelines](#usage-guidelines)
+    - [Audio Playback](#audio-playback)
+    - [Audio Recording](#audio-recording)
+    - [Audio Management](#audio-management)
+ - [Repositories Involved](#repositories-involved)
+
+## Introduction
+The **audio\_standard** repository is used to implement audio-related features, including audio playback, recording and volume and device management.
 
 **Figure 1** Position in the subsystem architecture
 
 ![](figures/en-us_image_0000001152315135.png)
 
-### Basic Concepts
+### Basic Concepts
 
 - **Sampling**
 
@@ -38,58 +39,113 @@ Audio data is in stream form. For the convenience of audio algorithm processing
 
 Pulse code modulation \(PCM\) is a method used to digitally represent sampled analog signals. It converts continuous-time analog signals into discrete-time digital signal samples.
 
-## Directory Structure
+## Directory Structure
 
 The structure of the repository directory is as follows:
 
 ```
 /foundation/multimedia/audio_standard # Audio code
 ├── frameworks # Framework code
-│   ├── innerkitsimpl # Internal interfaces implementation
-│   └── kitsimpl # External interfaces implementation
+│   ├── innerkitsimpl # Internal Native API Implementation
+│   └── kitsimpl # External JS API Implementation
 ├── interfaces # Interfaces code
-│   ├── innerkits # Internal interfaces
-│   └── kits # External interfaces
+│   ├── innerkits # Internal Native APIs
+│   └── kits # External JS APIs
+├── libsnd # Libsndfile build configuration
+├── pulseaudio # Pulseaudio build configuration and pulseaudio-hdi modules
├── sa_profile # Service configuration profile
 ├── services # Service code
 ├── LICENSE # License file
 └── ohos.build # Build file
 ```
 
-## Usage Guidelines
-
-1. Obtain an **AudioManager** instance.
+## Usage Guidelines
+### Audio Playback
+You can use APIs provided in this repository to convert audio data into audible analog signals, play the audio signals using output devices, and manage playback tasks. The following steps describe how to use **AudioRenderer** to develop the audio playback function:
+1. Use the **Create** API with the required stream type to get an **AudioRenderer** instance.
+   ```
+   AudioStreamType streamType = STREAM_MUSIC; // example stream type
+   std::unique_ptr<AudioRenderer> audioRenderer = AudioRenderer::Create(streamType);
+   ```
+2. (Optional) Static APIs **GetSupportedFormats**(), **GetSupportedChannels**(), **GetSupportedEncodingTypes**(), **GetSupportedSamplingRates**() can be used to query the supported values of each parameter.
+3. To prepare the device, call **SetParams** on the instance.
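+   Before calling **SetParams**, the step-2 helpers can be used to confirm that the values you intend to set are supported. A minimal sketch (the element types behind `auto` are an assumption; only the API names come from this README):
+   ```
+   // query supported values before choosing renderer parameters
+   auto supportedFormats = AudioRenderer::GetSupportedFormats();             // expected to include SAMPLE_S16LE
+   auto supportedSamplingRates = AudioRenderer::GetSupportedSamplingRates(); // expected to include SAMPLE_RATE_44100
+   ```
+   With supported values chosen, populate the parameter struct: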
+ ``` + AudioRendererParams rendererParams; + rendererParams.sampleFormat = SAMPLE_S16LE; + rendererParams.sampleRate = SAMPLE_RATE_44100; + rendererParams.channelCount = STEREO; + rendererParams.encodingType = ENCODING_PCM; + audioRenderer->SetParams(rendererParams); ``` - const audioManager = audio.getAudioManager(); +4. (Optional) use audioRenderer->**GetParams**(rendererParams); to validate SetParams +5. Call **audioRenderer->Start()** function on the AudioRenderer instance to start the playback task. +6. Get the buffer size can be written, using **GetBufferSize**. Read the audio data to be played and transfer it into the bytes stream. Call the **Write** function repeatedly to write data untill all the data in the buffer is written. + ``` + audioRenderer->GetBufferSize(bufferLen); + bytesToWrite = fread(buffer, 1, bufferLen, wavFile); // example shown reads audio data from a file + while ((bytesWritten < bytesToWrite) && ((bytesToWrite - bytesWritten) > minBytes)) { + bytesWritten += audioRenderer->Write(buffer + bytesWritten, bytesToWrite - bytesWritten); + if (bytesWritten < 0) + break; + } ``` +7. (Optional) Call audioRenderer->**Drain()** to drain the plackback stream. + +8. Call audioRenderer->**Stop()** function to Stop rendering. +9. After the playback task is complete, call the audioRenderer->**Release**() function on the AudioRenderer instance to release the resources. + +Provided the basic playback flow here. Please refer **audio_renderer.h** and **audio_info.h** for more useful APIs. -2. Obtain the audio stream volume. +### Audio Recording +You can use the APIs provided in this repository for your application to record voices using input devices, convert the voices into audio data, and manage recording tasks. The following steps describe how to use AudioReorder to develop the audio recording function: + +1. Use **Create** API with required stream type to get **AudioRecorder** instance. ``` - audioManager.getVolume(audio.AudioVolumeType.MEDIA, (err, value) => { - if (err) { - console.error(`failed to get volume ${err.message}`); - return; - } - console.log(`Media getVolume successful callback`); - }); + AudioStreamType streamType = STREAM_MUSIC; + std::unique_ptr audioRecorder = AudioRecorder::Create(streamType); ``` +2. (Optional) Static APIs **GetSupportedFormats**(), **GetSupportedChannels**(), **GetSupportedEncodingTypes**(), **GetSupportedSamplingRates()** can be used to get the supported values of the params. +3. To Prepare the device, call **SetParams** on the instance. + ``` + AudioRecorderParams recorderParams; + recorderParams.sampleFormat = SAMPLE_S16LE; + recorderParams.sampleRate = SAMPLE_RATE_44100; + recorderParams.channelCount = STEREO; + recorderParams.encodingType = ENCODING_PCM; -3. Set the audio stream volume. - + audioRecorder->SetParams(recorderParams); ``` - audioManager.setVolume(audio.AudioVolumeType.MEDIA, 30, (err)=>{ - if (err) { - console.error(`failed to set volume ${err.message}`); - return; - } - console.log(`Media setVolume successful callback`); - }) +4. (Optional) use audioRecorder->**GetParams**(recorderParams) to validate SetParams() +5. Call audioRenderer->**Start**() function on the AudioRecorder instance to start the recording task. +6. Get the buffer size can be read, using GetBufferSize. Read the recorded audio data and converts it to a byte stream. Call the read function repeatedly to read data. 
+   ```
+   audioRecorder->GetBufferSize(bufferLen)
+   // set isBlockingRead = true/false for a blocking/non-blocking Read
+   while (numBuffersToRecord) {
+       bytesRead = audioRecorder->Read(*buffer, bufferLen, isBlockingRead);
+       if (bytesRead < 0) {
+           break;
+       } else if (bytesRead > 0) {
+           fwrite(buffer, size, bytesRead, recFile); // example writes the recorded data into a file
+           numBuffersToRecord--;
+       }
+   }
+   ```
+7. (Optional) Call audioRecorder->**Flush**() to flush the record buffer of this stream.
+8. Call the audioRecorder->**Stop**() function on the AudioRecorder instance to stop the recording.
+9. After the recording task is complete, call the audioRecorder->**Release**() function on the AudioRecorder instance to release resources.
+
+
+### Audio Management
+JS apps can use the APIs provided by auido manager to control the volume and the device.\
+Please refer to the following for JS usage of audio volume and device management:
+ https://gitee.com/openharmony/docs/blob/master/en/application-dev/js-reference/audio-management.md
 
-## Repositories Involved
-multimedia\_audio\_standard
+## Repositories Involved
+[multimedia\_audio\_standard](https://gitee.com/openharmony/multimedia_audio_standard)\
+[multimedia\_media\_standard](https://gitee.com/openharmony/multimedia_media_standard)
-- 
Gitee

From b5036e5eff6adbad035712b7e0586e622b0aa810 Mon Sep 17 00:00:00 2001
From: Vaidegi B
Date: Mon, 26 Jul 2021 18:00:45 +0530
Subject: [PATCH 2/2] Update audio_standard Readme.md en with latest APIs

Signed-off-by: Vaidegi Balakrishnan
---
 README.md | 48 +++++++++++++++++++++++++++---------------------
 1 file changed, 27 insertions(+), 21 deletions(-)
 mode change 100755 => 100644 README.md

diff --git a/README.md b/README.md
old mode 100755
new mode 100644
index 067cb37a08..916484a59c
--- a/README.md
+++ b/README.md
@@ -10,7 +10,7 @@
 - [Repositories Involved](#repositories-involved)
 
 ## Introduction
-The **audio\_standard** repository is used to implement audio-related features, including audio playback, recording and volume and device management.
+The **audio\_standard** repository is used to implement audio-related features, including audio playback, recording, volume management and device management.
 
 **Figure 1** Position in the subsystem architecture
 
@@ -46,13 +46,12 @@ The structure of the repository directory is as follows:
 ```
 /foundation/multimedia/audio_standard # Audio code
 ├── frameworks # Framework code
-│   ├── innerkitsimpl # Internal Native API Implementation
+│   ├── innerkitsimpl # Internal Native API implementation,
+│   │                 # Pulseaudio and libsndfile build configuration, and pulseaudio-hdi modules
 │   └── kitsimpl # External JS API Implementation
-├── interfaces # Interfaces code
+├── interfaces # Interfaces
 │   ├── innerkits # Internal Native APIs
 │   └── kits # External JS APIs
-├── libsnd # Libsndfile build configuration
-├── pulseaudio # Pulseaudio build configuration and pulseaudio-hdi modules
 ├── sa_profile # Service configuration profile
 ├── services # Service code
 ├── LICENSE # License file
@@ -63,10 +62,10 @@ The structure of the repository directory is as follows:
 
 ### Audio Playback
 You can use APIs provided in this repository to convert audio data into audible analog signals, play the audio signals using output devices, and manage playback tasks. The following steps describe how to use **AudioRenderer** to develop the audio playback function:
 1. Use the **Create** API with the required stream type to get an **AudioRenderer** instance.
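+   **Create** is expected to return an empty pointer when it fails (for example, for an unsupported stream type); the null check below is an illustrative assumption, not part of the original steps:
+   ```
+   if (audioRenderer == nullptr) {
+       // handle creation failure before using the instance
+   }
+   ```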
-   ```
-   AudioStreamType streamType = STREAM_MUSIC; // example stream type
-   std::unique_ptr<AudioRenderer> audioRenderer = AudioRenderer::Create(streamType);
-   ```
+   ```
+   AudioStreamType streamType = STREAM_MUSIC; // example stream type
+   std::unique_ptr<AudioRenderer> audioRenderer = AudioRenderer::Create(streamType);
+   ```
 2. (Optional) Static APIs **GetSupportedFormats**(), **GetSupportedChannels**(), **GetSupportedEncodingTypes**(), **GetSupportedSamplingRates**() can be used to query the supported values of each parameter.
 3. To prepare the device, call **SetParams** on the instance.
    ```
@@ -80,22 +79,25 @@ You can use APIs provided in this repository to convert audio data into audible
 4. (Optional) Use audioRenderer->**GetParams**(rendererParams) to validate **SetParams**().
 5. Call **audioRenderer->Start()** function on the AudioRenderer instance to start the playback task.
-6. Get the buffer size can be written, using **GetBufferSize**. Read the audio data to be played and transfer it into the bytes stream. Call the **Write** function repeatedly to write data untill all the data in the buffer is written.
+6. Get the buffer length to be written, using the **GetBufferSize** API.
    ```
    audioRenderer->GetBufferSize(bufferLen);
-   bytesToWrite = fread(buffer, 1, bufferLen, wavFile); // example shown reads audio data from a file
+   ```
+7. Read the audio data to be played from the source (for example, an audio file) and transfer it into the byte stream. Call the **Write** function repeatedly to write the render data.
+   ```
+   bytesToWrite = fread(buffer, 1, bufferLen, wavFile);
    while ((bytesWritten < bytesToWrite) && ((bytesToWrite - bytesWritten) > minBytes)) {
        int32_t written = audioRenderer->Write(buffer + bytesWritten, bytesToWrite - bytesWritten);
        if (written < 0) { // stop on a write error (negative return value)
           break;
        }
        bytesWritten += written;
    }
    ```
-7. (Optional) Call audioRenderer->**Drain()** to drain the plackback stream.
+8. Call audioRenderer->**Drain**() to drain the playback stream.
 
-8. Call audioRenderer->**Stop()** function to Stop rendering.
-9. After the playback task is complete, call the audioRenderer->**Release**() function on the AudioRenderer instance to release the resources.
+9. Call audioRenderer->**Stop()** function to stop rendering.
+10. After the playback task is complete, call the audioRenderer->**Release**() function on the AudioRenderer instance to release the resources.
 
-Provided the basic playback flow here. Please refer **audio_renderer.h** and **audio_info.h** for more useful APIs.
+The basic playback use case is covered above. Please refer to [**audio_renderer.h**](https://gitee.com/openharmony/multimedia_audio_standard/interfaces/innerkits/native/audiorenderer/include/audio_renderer.h) and [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_standard/interfaces/innerkits/native/audiocommon/include/audio_info.h) for more APIs.
 
 ### Audio Recording
@@ -119,9 +121,12 @@ You can use the APIs provided in this repository for your application to record
 4. (Optional) Use audioRecorder->**GetParams**(recorderParams) to validate **SetParams**().
 5. Call audioRecorder->**Start**() function on the AudioRecorder instance to start the recording task.
-6. Get the buffer size can be read, using GetBufferSize. Read the recorded audio data and converts it to a byte stream. Call the read function repeatedly to read data.
+6. Get the buffer length to be read, using the **GetBufferSize** API.
+   ```
+   audioRecorder->GetBufferSize(bufferLen);
+   ```
+7. Read the recorded audio data and convert it to a byte stream.
+   Call the **Read** function repeatedly to read data until you want to stop recording.
    ```
-   audioRecorder->GetBufferSize(bufferLen)
    // set isBlockingRead = true/false for a blocking/non-blocking Read
    while (numBuffersToRecord) {
        bytesRead = audioRecorder->Read(*buffer, bufferLen, isBlockingRead);
        if (bytesRead < 0) {
            break;
        } else if (bytesRead > 0) {
            fwrite(buffer, size, bytesRead, recFile); // example writes the recorded data into a file
            numBuffersToRecord--;
        }
    }
    ```
-7. (Optional) Call audioRecorder->**Flush**() to flush the record buffer of this stream.
-8. Call the audioRecorder->**Stop**() function on the AudioRecorder instance to stop the recording.
-9. After the recording task is complete, call the audioRecorder->**Release**() function on the AudioRecorder instance to release resources.
+8. (Optional) Call audioRecorder->**Flush**() to flush the record buffer of this stream.
+9. Call the audioRecorder->**Stop**() function on the AudioRecorder instance to stop the recording.
+10. After the recording task is complete, call the audioRecorder->**Release**() function on the AudioRecorder instance to release resources.
 
+The basic recording use case is covered above. Please refer to [**audio_recorder.h**](https://gitee.com/openharmony/multimedia_audio_standard/interfaces/innerkits/native/audiorecorder/include/audio_recorder.h) and [**audio_info.h**](https://gitee.com/openharmony/multimedia_audio_standard/interfaces/innerkits/native/audiocommon/include/audio_info.h) for more APIs.
 
 ### Audio Management
-JS apps can use the APIs provided by auido manager to control the volume and the device.\
+JS apps can use the APIs provided by audio manager to control the volume and the device.\
 Please refer to the following for JS usage of audio volume and device management:
 
  https://gitee.com/openharmony/docs/blob/master/en/application-dev/js-reference/audio-management.md
 
-- 
Gitee