@patrick-samy
Last active August 27, 2024 18:13
Split large audio file and transcribe it using the Whisper API from OpenAI
import os
import sys

import openai
from dotenv import load_dotenv
from pydub import AudioSegment

load_dotenv()
openai.api_key = os.getenv('OPENAI_API_KEY')

audio = AudioSegment.from_mp3(sys.argv[1])

segment_length = 25 * 60  # seconds per segment
duration = audio.duration_seconds

print('Segment length: %d seconds' % segment_length)
print('Duration: %d seconds' % duration)

segment_filename = os.path.splitext(os.path.basename(sys.argv[1]))[0]

# note: int() truncates, so a trailing remainder shorter than
# segment_length is silently dropped
number_of_segments = int(duration / segment_length)

segment_start = 0
segment_end = segment_length * 1000  # pydub slices in milliseconds
segment_number = 1
prompt = ""

for i in range(number_of_segments):
    sound_export = audio[segment_start:segment_end]
    exported_file = '/tmp/' + segment_filename + '-' + str(segment_number) + '.mp3'
    sound_export.export(exported_file, format="mp3")
    print('Exported segment %d of %d' % (segment_number, number_of_segments))

    with open(exported_file, "rb") as f:
        data = openai.Audio.transcribe("whisper-1", f, prompt=prompt)
    print('Transcribed segment %d of %d' % (segment_number, number_of_segments))

    with open(os.path.join('transcripts', segment_filename + '.txt'), "a") as f:
        f.write(data.text)

    prompt += data.text
    segment_start += segment_length * 1000
    segment_end += segment_length * 1000
    segment_number += 1
ceinem commented Apr 29, 2024

Hi, thanks for sharing this code. Just as a warning: you use int() to find the number of segments, which rounds down and so drops the ending of the audio. E.g. with a duration of 3200 seconds, the code determines there are 2 segments of 1500 seconds each and drops the last 200 seconds. math.ceil() would be the correct function.
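The rounding difference is easy to check directly (illustrative numbers matching the 1500-second segment length in the gist):

```python
import math

duration = 3200          # seconds of audio (example from the comment above)
segment_length = 1500    # 25 minutes, as in the gist

# int() truncates toward zero, silently losing the 200-second tail
print(int(duration / segment_length))        # 2 segments

# math.ceil() rounds up, so the remainder gets its own shorter segment
print(math.ceil(duration / segment_length))  # 3 segments
```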
Also, I believe the OpenAI API usage is no longer current; I had to adjust it to:

from openai import OpenAI
client = OpenAI()
data = client.audio.transcriptions.create(model="whisper-1", file=f)

tgran2028 commented Jun 23, 2024

@ceinem

Here is a fix.

import math
from pydub import AudioSegment


MAX_SEGMENT_LENGTH: int = 10  # minutes


audio = AudioSegment.from_file('...some-audio-file.mp3')

# total number of segments 
n_seg: int = math.ceil(audio.duration_seconds / (MAX_SEGMENT_LENGTH * 60))

# duration per segment in seconds. Consistent for all segments.
seg_duration: float = audio.duration_seconds / n_seg

# slice the audio into equal-length segments
segs: list[AudioSegment] = [
    audio[i * seg_duration * 1000:(i + 1) * seg_duration * 1000]
    for i in range(n_seg)
]

# can now concurrently transcribe with OpenAI 
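A sketch of that concurrent step using a thread pool. The `transcribe` function below is a hypothetical stand-in for exporting a segment and calling `client.audio.transcriptions.create(model="whisper-1", file=f)` (which needs an API key); the point is that once the rolling `prompt` chaining is dropped, segments are independent and can be transcribed in parallel, and `pool.map` returns results in submission order so the final transcript needs no re-sorting:

```python
from concurrent.futures import ThreadPoolExecutor

def transcribe(index: int) -> str:
    # placeholder for exporting segs[index] to a file and sending it to
    # the Whisper API; returns a dummy transcript so the sketch runs
    return "transcript of segment %d" % index

n_seg = 3  # would come from the math.ceil computation above

# map() yields results in the order the segments were submitted
with ThreadPoolExecutor(max_workers=4) as pool:
    transcripts = list(pool.map(transcribe, range(n_seg)))

full_transcript = "\n".join(transcripts)
print(full_transcript)
```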

@parterburn

I'm curious why you're adding the previous segment's transcription into the prompt for future segment transcriptions here? The OpenAI docs say the prompt ignores anything over 224 tokens:

In addition, the prompt is limited to only 224 tokens. If the prompt is longer than 224 tokens, only the final 224 tokens of the prompt will be considered; all prior tokens will be silently ignored.
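Given that limit, a rolling prompt only helps if it is trimmed to the most recent tokens before each request. A minimal word-level illustration of keeping only the tail (Whisper's real tokenizer is subword-based, so actual token counts will differ):

```python
MAX_PROMPT_TOKENS = 224

def trim_prompt(prompt: str, limit: int = MAX_PROMPT_TOKENS) -> str:
    # crude word-level approximation of token counting; the API tokenizes
    # with Whisper's own vocabulary, so this only roughly matches the
    # real 224-token window
    words = prompt.split()
    return " ".join(words[-limit:])

rolling = " ".join("word%d" % i for i in range(500))
trimmed = trim_prompt(rolling)
print(len(trimmed.split()))  # 224: only the tail is kept
```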
