Get transcript

This endpoint enables you to get the full transcript of a recording.

Before we start

Transcripts are generated on request. If you have already generated a transcript for a recording, you can use its recordingId to retrieve it with this endpoint.

You can only get the transcript of a recorded meeting. To record a meeting, you must enable recording during the meeting, either by clicking the 'Record' button in the toolbar or by invoking the toggleRecording() method on the Video Conference, as sketched below.
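
For illustration only, here is a hedged TypeScript sketch of toggling recording from code. Only toggleRecording() comes from this page; the initializeVideoConference() helper and the VideoConferenceLike shape are hypothetical stand-ins for however your application creates the Video Conference component:

// Hypothetical helper and interface: stand-ins for your real SuperViz
// Video Conference setup. Only toggleRecording() is referenced by this guide.
interface VideoConferenceLike {
  toggleRecording(): void;
}

declare function initializeVideoConference(): Promise<VideoConferenceLike>;

async function recordMeeting(): Promise<void> {
  const videoConference = await initializeVideoConference();

  // Starts recording if it is stopped, stops it if it is running.
  videoConference.toggleRecording();
}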

How to use

You can use the following cURL command to call the endpoint:

curl --location --request GET 'https://api.superviz.com/recordings/transcripts/{recordingId}' \
--header 'Content-Type: application/json' \
--header "client_id: ${YOUR_CLIENT_ID}" \
--header "secret: ${YOUR_SECRET}" \
--header "passphrase: ${YOUR_PASSPHRASE}"

HTTP Request

GET https://api.superviz.com/recordings/transcripts/{recordingId}
Name | Description
recordingId | This field contains the unique identifier of the recording, which can be found with the GET Recordings endpoint.

Headers

When using this endpoint, you need to provide your Client ID, Secret, and passphrase. The following headers are required:

Name | Description
client_id | Required. The Client ID used to authenticate your requests. You can retrieve your Client ID under Dashboard > Developer > Keys.
secret | Required. The Secret Key used to authenticate your requests. You can create a new API Secret under Dashboard > Developer > Keys.
passphrase | Required. The passphrase of your organization that allows the decryption of the transcript content. You can create a new passphrase under Dashboard > Developer > Keys.

Response

Status code 200 indicates that the request was successful. The response is a list of the spoken content in JSON format and includes the following fields:

Name | Type | Description
text | string | This field contains the spoken content.
duration | number | This field contains the duration of the spoken content in seconds.
userName | string | This field contains the label of the current speaker. Values will always be Speaker #, where # indicates which speaker it is. Note: it does not represent the participant.name provided when initializing a meeting. For example, in a meeting in which three participants spoke, the first one to speak would be Speaker 1, the second voice Speaker 2, and the last Speaker 3.
startTime | string | This field contains the date and time when the spoken content started.
endTime | string | This field contains the date and time when the spoken content ended.
sentiment | object | This field contains the sentiment analysis of the spoken content.
sentiment.score | number | This field contains the sentiment score of the spoken content. The score ranges from -1 to 1, where -1 is the most negative sentiment, 0 is neutral, and 1 is the most positive sentiment.
sentiment.suggested | string | This field contains the suggested sentiment of the spoken content. The possible values are positive, neutral, and negative.

Example:

[
  {
    "text": "Hello everyone, in the next hour, we will be discussing the latest trends in the film industry.",
    "duration": 8,
    "userName": "Speaker 1",
    "startTime": "2024-06-13T17:09:33.703Z",
    "endTime": "2024-06-13T17:09:40.103Z",
    "sentiment": {
      "score": 0.988,
      "suggested": "positive"
    }
  },
  {
    "text": "I am excited to share with you the latest news about the upcoming movie.",
    "duration": 8,
    "userName": "Speaker 2",
    "startTime": "2024-06-13T17:09:33.703Z",
    "endTime": "2024-06-13T17:09:40.103Z",
    "sentiment": {
      "score": 0.988,
      "suggested": "positive"
    }
  }
]
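
As a usage sketch, the entries above could be typed and consumed like this in TypeScript; TranscriptEntry and printTranscript are illustrative names, not part of the SuperViz API:

// Illustrative types and helper for consuming the response above;
// TranscriptEntry and printTranscript are not part of the SuperViz API.
interface TranscriptEntry {
  text: string;
  duration: number; // seconds
  userName: string; // "Speaker 1", "Speaker 2", ...
  startTime: string; // ISO 8601 date and time
  endTime: string; // ISO 8601 date and time
  sentiment: {
    score: number; // -1 (most negative) to 1 (most positive)
    suggested: "positive" | "neutral" | "negative";
  };
}

function printTranscript(entries: TranscriptEntry[]): void {
  for (const entry of entries) {
    console.log(`[${entry.startTime}] ${entry.userName}: ${entry.text}`);
  }

  // Average sentiment score across all spoken segments.
  const average =
    entries.reduce((sum, entry) => sum + entry.sentiment.score, 0) /
    (entries.length || 1);
  console.log(`Average sentiment score: ${average.toFixed(3)}`);
}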