Ensure compatibility across multiple platforms, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above. Reduce dependencies to prevent version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the core capabilities of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

```csharp
using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
```

For local files, similar code can be used to obtain a transcription:

```csharp
await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    });

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
```

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is especially useful for applications that require immediate processing of audio data.

```csharp
using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}"));

transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}"));
```
With the transcriber configured, connect, stream audio, and close the session:

```csharp
await transcriber.ConnectAsync();

// Pseudocode for getting audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();
```

Using LeMUR for LLM Applications

The SDK integrates with LeMUR to enable developers to build large language model (LLM) applications on voice data. Below is an example:

```csharp
var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a short summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);
Console.WriteLine(response.Response);
```

Audio Intelligence Models

Additionally, the SDK comes with built-in support for audio intelligence models, enabling sentiment analysis and other advanced features.

```csharp
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}
```

For more details, visit the official AssemblyAI blog.

Image source: Shutterstock.
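The GetAudio call in the real-time example is pseudocode standing in for any source of raw audio. A minimal sketch of one such source, reading a PCM file in fixed-size chunks and handing each chunk to a callback; the AudioChunker class name, file path, and chunk size are illustrative, not part of the SDK, and in the real-time example the callback body would be the `transcriber.SendAudioAsync(chunk)` call:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

static class AudioChunker
{
    // Reads the file at `path` in chunks of up to `chunkSize` bytes and
    // invokes `onChunk` once per chunk, mirroring the shape of the
    // GetAudio(async (chunk) => ...) pseudocode above.
    public static async Task GetAudio(string path, int chunkSize, Func<byte[], Task> onChunk)
    {
        await using var stream = new FileStream(path, FileMode.Open, FileAccess.Read);
        var buffer = new byte[chunkSize];
        int read;
        while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
        {
            // Copy only the bytes actually read, so the final (short) chunk
            // is not padded with stale buffer contents.
            var chunk = new byte[read];
            Array.Copy(buffer, chunk, read);
            await onChunk(chunk);
        }
    }
}
```

A microphone-backed implementation would follow the same pattern, pushing each captured buffer through the same callback.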