This segment is hosted in the studio by Philip Trafimov. The ElevenLabs neural network project, known for generating surprisingly human-like voices, has now learned to create background sound for dubbing videos. Strictly speaking, it learned this a little earlier, but now the source code of the tool has been made publicly available, and the tool itself has been made fully automatic, sparing the user even the need to write a prompt, since the process now involves two generative neural networks.

It works as follows. The user uploads a video up to 22 seconds long. From each second, the program pulls four separate frames and sends them to GPT-4o, the latest version of the neural network from OpenAI, which recognizes the images and formulates a text prompt describing what the scene could sound like and how. From this prompt, the ElevenLabs generative model produces the sound design. The output is four options; you can listen to them and download them already overlaid on the original video, for
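The frame-sampling step described above (four frames from each second of a clip capped at 22 seconds) can be sketched as a small pure function. The function name, the even-spacing choice, and the parameters are illustrative assumptions, not details confirmed by the source.

```python
def sample_frame_indices(fps: float, duration_s: float,
                         per_second: int = 4,
                         max_duration_s: float = 22.0) -> list[int]:
    """Return frame indices: `per_second` frames from each whole second.

    Assumption: samples are spread evenly within each second; the actual
    tool may pick frames differently.
    """
    duration_s = min(duration_s, max_duration_s)  # clip is capped at 22 s
    indices = []
    for sec in range(int(duration_s)):
        for k in range(per_second):
            # k/per_second offsets the sample within the current second
            indices.append(round((sec + k / per_second) * fps))
    return indices

# A 2-second clip at 24 fps yields 4 x 2 = 8 frames:
print(sample_frame_indices(24, 2))  # [0, 6, 12, 18, 24, 30, 36, 42]
```

Each returned index would then map to a decoded frame that gets sent to the vision model for captioning, and the resulting text prompt drives the sound-generation model.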