
Add support for realtime API #3714

Open
mudler opened this issue Oct 1, 2024 · 7 comments · May be fixed by #3722
Labels
enhancement New feature or request roadmap

Comments

@mudler
Owner

mudler commented Oct 1, 2024

Is your feature request related to a problem? Please describe.

OpenAI just extended their API with realtime support with web sockets
https://openai.com/index/introducing-the-realtime-api/?s=09

Describe the solution you'd like

LocalAI should support backends with voice capabilities and introduce an API endpoint compatible with OpenAI clients.

Ideally it should also support function calling, as OpenAI does:

Under the hood, the Realtime API lets you create a persistent WebSocket connection to exchange messages with GPT-4o. The API supports function calling, which makes it possible for voice assistants to respond to user requests by triggering actions or pulling in new context. For example, a voice assistant could place an order on behalf of the user or retrieve relevant customer information to personalize its responses.

It seems the Chat Completions API is also going to get audio input/output, but API specs are not available yet:

Audio in the Chat Completions API will be released in the coming weeks, as a new model gpt-4o-audio-preview. With gpt-4o-audio-preview, developers can input text or audio into GPT-4o and receive responses in text, audio, or both.

Describe alternatives you've considered

Additional context

#3602
#3722

API docs: https://platform.openai.com/docs/guides/realtime https://platform.openai.com/docs/api-reference/realtime-client-events/session-update

https://github.com/tmc/grpc-websocket-proxy
https://github.com/openconfig/grpctunnel

https://github.com/mudler/LocalAI/tree/feat/realtime

open source models that can handle realtime speech:

@mudler mudler added enhancement New feature or request roadmap labels Oct 1, 2024
@mudler
Owner Author

mudler commented Oct 2, 2024

A good candidate VAD library: https://github.com/snakers4/silero-vad/tree/master/examples/go
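For orientation, the frame-level decision a VAD makes can be sketched with a naive energy threshold. This is NOT silero-vad (which uses a trained model and is far more robust); the RMS threshold below is an arbitrary assumption, purely to show where a VAD would sit in the audio pipeline.

```go
package main

import (
	"fmt"
	"math"
)

// IsSpeech reports whether a single PCM frame's RMS energy exceeds a
// threshold. Real VADs (e.g. silero-vad) replace this heuristic with a
// neural model scoring each frame.
func IsSpeech(frame []float32, threshold float64) bool {
	if len(frame) == 0 {
		return false
	}
	var energy float64
	for _, s := range frame {
		energy += float64(s) * float64(s)
	}
	rms := math.Sqrt(energy / float64(len(frame)))
	return rms > threshold
}

func main() {
	loud := make([]float32, 160) // 10 ms frame at 16 kHz
	for i := range loud {
		loud[i] = 0.5
	}
	quiet := make([]float32, 160) // all zeros: silence
	fmt.Println(IsSpeech(loud, 0.1), IsSpeech(quiet, 0.1)) // prints: true false
}
```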

@mattkanwisher
Contributor

I started looking at stubbing out the API; it's mostly just JSON. Curious why you are suggesting the grpc-websocket-proxy?

@mudler
Owner Author

mudler commented Oct 2, 2024

I started looking at stubbing out the API; it's mostly just JSON. Curious why you are suggesting the grpc-websocket-proxy?

I was digging a bit into projects that interface with gRPC and WebSockets. I was just adding some code/notes here to brainstorm with; it was a very preliminary search that might be useful as a reference or for getting some ideas.

@mudler
Owner Author

mudler commented Oct 3, 2024

I started looking at stubbing out the API

Are you going to open up a PR? I was about to start playing with it as well, but if you are already taking a stab at it I'd go with #3670 instead :)

Update: Opened #3722 with what I had laying around. Now gonna have a look at vLLM first 🥽

@thiswillbeyourgithub

thiswillbeyourgithub commented Oct 12, 2024

A good candidate VAD library: https://github.com/snakers4/silero-vad/tree/master/examples/go

Heard about that one btw: https://github.com/wavey-ai/mel-spec?tab=readme-ov-file

Also copy-pasting some possibly useful resources I linked in another repo, since 4o won't be the only model to support this if we don't want to rely too much on OpenAI's code:


I saw on Hacker News that Agents by LiveKit, used to build the OpenAI Realtime API demo as well as Cerebras voice, seems to be open source.

They have tons of demos and code on their github. I think there must be a llama-omni implementation somewhere that would be a killer feature for open-webui!

Here's a particularly interesting demo that connects stt + llm + tts: https://github.com/livekit/agents/blob/main/examples/voice-pipeline-agent/minimal_assistant.py

I made an issue to ask for a demo for Llama-Omni, and also for Kyutai's Moshi model. There's also Modal's Moshi implementation: https://github.com/modal-labs/quillman
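The STT + LLM + TTS chaining that the linked minimal_assistant.py demo does can be sketched in Go with three interfaces. The interface names and echo stubs below are made up for illustration; a real LocalAI pipeline would plug in whisper-style transcription, an LLM backend, and a TTS engine, and would stream rather than process one utterance at a time.

```go
package main

import (
	"fmt"
	"strings"
)

// The three stages of a voice assistant pipeline (hypothetical interfaces).
type STT interface{ Transcribe(audio []byte) (string, error) }
type LLM interface{ Reply(prompt string) (string, error) }
type TTS interface{ Synthesize(text string) ([]byte, error) }

// Pipeline chains the stages together.
type Pipeline struct {
	stt STT
	llm LLM
	tts TTS
}

// Run pushes one user utterance through all three stages.
func (p Pipeline) Run(audio []byte) ([]byte, error) {
	text, err := p.stt.Transcribe(audio)
	if err != nil {
		return nil, err
	}
	reply, err := p.llm.Reply(text)
	if err != nil {
		return nil, err
	}
	return p.tts.Synthesize(reply)
}

// Echo stubs so the sketch runs without any model behind it.
type echoSTT struct{}

func (echoSTT) Transcribe(a []byte) (string, error) { return string(a), nil }

type echoLLM struct{}

func (echoLLM) Reply(p string) (string, error) { return "you said: " + strings.ToLower(p), nil }

type echoTTS struct{}

func (echoTTS) Synthesize(t string) ([]byte, error) { return []byte(t), nil }

func main() {
	p := Pipeline{echoSTT{}, echoLLM{}, echoTTS{}}
	out, err := p.Run([]byte("Hello"))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // prints: you said: hello
}
```

Keeping the stages behind interfaces is what makes it possible to swap in different open-source models per stage, which is the point the resources above are circling around.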

@theboringhumane

I'm trying to build a server implementation based on the OpenAI spec for their Realtime API.

https://github.com/iamharshdev/OLlamaGate

@mudler
Owner Author

mudler commented Nov 6, 2024

There is a WIP branch over here: #3722

Contributions and feedback always welcome!
