# Codama / Smartclip / Video Recording
### Code Example
```javascript
/**
 * Initializes video and audio capture with specific settings to ensure all audio is included.
 * This setup disables echo cancellation, noise suppression, and automatic gain control, which are
 * crucial for capturing the playback sound accurately in environments like iOS where these settings
 * might otherwise interfere with the desired audio capture.
 * @returns {Promise<MediaRecorder>} The active MediaRecorder instance.
 */
const initVideoAndAudioCapture = async () => {
  try {
    const constraints = {
      video: true,
      audio: {
        echoCancellation: false,
        noiseSuppression: false,
        autoGainControl: false
      }
    };
    const stream = await navigator.mediaDevices.getUserMedia(constraints);
    console.log('Video and audio capture initialized.');

    // Start recording the video with audio
    const recorder = new MediaRecorder(stream);
    const recordedChunks = [];

    recorder.ondataavailable = (event) => {
      // Ignore empty chunks that some browsers emit
      if (event.data && event.data.size > 0) {
        recordedChunks.push(event.data);
      }
    };

    recorder.onstop = async () => {
      const blob = new Blob(recordedChunks, { type: 'video/webm' });
      await uploadVideo(blob);
    };

    recorder.start();
    return recorder;
  } catch (error) {
    console.error('Error initializing video and audio capture:', error);
    throw error;
  }
};

/**
 * Placeholder function for uploading video to a server.
 * Implement the actual upload logic as per your backend requirements.
 * @param {Blob} videoBlob The blob containing the video data to be uploaded.
 */
async function uploadVideo(videoBlob) {
  // Implementation for video upload logic goes here
  console.log('Uploading video...');
  // Assume implementation exists that handles the upload
}
```
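
For context, here is a minimal usage sketch. It assumes the recording is wired to two hypothetical buttons (`#start-recording` and `#stop-recording`, not part of the original code); stopping the recorder is what fires the `onstop` handler above and triggers the upload.

```javascript
// Hypothetical wiring: start recording on one button, stop on another.
// Stopping the recorder fires the onstop handler, which uploads the blob.
let activeRecorder = null;

document.querySelector('#start-recording').addEventListener('click', async () => {
  activeRecorder = await initVideoAndAudioCapture();
});

document.querySelector('#stop-recording').addEventListener('click', () => {
  if (activeRecorder && activeRecorder.state === 'recording') {
    activeRecorder.stop();
    // Release the camera and microphone once recording is done.
    activeRecorder.stream.getTracks().forEach((track) => track.stop());
  }
});
```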
### Key Points to Implement and Consider
1. **Audio Settings**: The critical change is disabling `echoCancellation`, `noiseSuppression`, and `autoGainControl` in the `getUserMedia` constraints. These processing steps are usually enabled by default and can filter out or alter the playback audio picked up by the microphone, particularly on iOS devices. Disabling them keeps the microphone input as raw and unmodified as possible, which is essential for capturing the user's voice and the playback audio simultaneously. Because browsers treat constraints as requests rather than guarantees, it is worth verifying what was actually applied (see the first sketch after this list).
2. **MediaRecorder**: The example uses the `MediaRecorder` API to record the media stream. When recording stops, the `onstop` handler compiles the recorded chunks into a single `Blob` and passes it to `uploadVideo`, which you need to implement for your backend; a possible `fetch`-based implementation is sketched after this list.
3. **Testing**: Extensive testing on iOS devices is crucial, especially to confirm that the audio capture settings behave as expected across different models and OS versions. Hardware and software differences affect both audio capture and the recording container format (iOS Safari typically produces MP4 rather than WebM), so test in as realistic a user environment as possible; see the MIME type sketch after this list.
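
For point 1, the constraints passed to `getUserMedia` are requests, not guarantees, and iOS Safari may silently ignore some of them. A minimal sketch to log what the browser actually applied to the audio track (the function name is illustrative):

```javascript
// Sketch: log the settings the browser actually applied to the audio track.
// Some of these fields may be undefined on browsers that do not report them.
const logAppliedAudioSettings = (stream) => {
  const [audioTrack] = stream.getAudioTracks();
  if (!audioTrack) {
    console.warn('No audio track found on the stream.');
    return;
  }
  const settings = audioTrack.getSettings();
  console.log('Applied audio settings:', {
    echoCancellation: settings.echoCancellation,
    noiseSuppression: settings.noiseSuppression,
    autoGainControl: settings.autoGainControl
  });
};
```

Call it with the stream behind the recorder, e.g. `logAppliedAudioSettings(recorder.stream)`.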
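For point 2, one possible `uploadVideo` implementation is sketched below using `fetch` and `FormData`. The `/api/videos` endpoint, the `video` field name, and the file name are placeholders to adapt to whatever your backend expects.

```javascript
/**
 * Sketch of an upload implementation. The endpoint URL, field name, and
 * error handling here are assumptions, not part of the original example.
 * @param {Blob} videoBlob The blob containing the video data to be uploaded.
 */
async function uploadVideo(videoBlob) {
  const formData = new FormData();
  formData.append('video', videoBlob, 'recording.webm');

  const response = await fetch('/api/videos', {
    method: 'POST',
    body: formData
  });

  if (!response.ok) {
    throw new Error(`Upload failed with status ${response.status}`);
  }
  console.log('Video uploaded successfully.');
}
```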
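For point 3, container support differs across devices: iOS Safari generally records MP4 rather than WebM, so part of the testing is picking a MIME type the current browser supports instead of hard-coding `video/webm`. A hedged sketch using `MediaRecorder.isTypeSupported` (the candidate list is an assumption):

```javascript
// Sketch: pick a recording MIME type that the current browser supports.
// Check this before constructing the MediaRecorder and reuse the same type
// when creating the Blob in the onstop handler.
const pickSupportedMimeType = () => {
  const candidates = ['video/webm;codecs=vp9', 'video/webm', 'video/mp4'];
  return candidates.find((type) => MediaRecorder.isTypeSupported(type)) || '';
};

// Example usage inside initVideoAndAudioCapture:
// const mimeType = pickSupportedMimeType();
// const recorder = new MediaRecorder(stream, mimeType ? { mimeType } : undefined);
```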