The Lyrebird AI has been trained on many, many voices. It's found a more efficient way of sounding like you.

The kind of algorithms we are using is something called deep learning, or neural networks. And something that makes these kinds of algorithms special is that you don't need to give them specific things to look for. And so, with this DNA of the voice, we know that you are able to synthesise new voices based on it, and they will sound like the original voices, but we don't really know what is inside of them. So it's a bit of a black box.

But now I'm using a prototype of version two, which has been trained using Spanish voices. And this is the result.

Is this not just the same as taking what I'm saying, turning it into text, putting it through an online translation tool, and then putting the resulting text through Lyrebird?

Not exactly, because there are some sounds in Spanish, for instance the strong ‘r’, that are not common in some other languages. And so we could make you pronounce that sound,
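The "you don't need to give them specific things to look for" point can be sketched in code. Below is a toy neural network, trained end to end by gradient descent, that learns the XOR pattern without ever being told which feature matters: the hidden layer discovers its own internal representation. This is a minimal illustration of the general idea only, not Lyrebird's actual system, which is vastly larger.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR: the target depends on a combination of inputs that no single
# input reveals on its own -- the network must learn that itself.
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
HIDDEN = 4  # a handful of hidden units is enough for this toy task

w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(HIDDEN)]
b1 = [0.0] * HIDDEN
w2 = [random.uniform(-1, 1) for _ in range(HIDDEN)]
b2 = 0.0

def forward(x):
    """One pass through the network: hidden activations, then output."""
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j])
         for j in range(HIDDEN)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(HIDDEN)) + b2)
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA)

initial_loss = total_loss()

lr = 0.5
for _ in range(5000):
    for x, t in DATA:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)  # output-layer delta
        for j in range(HIDDEN):
            # hidden-layer delta, computed before w2[j] is updated
            dh = dy * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * dy * h[j]
            w1[j][0] -= lr * dh * x[0]
            w1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dy

final_loss = total_loss()
print(f"loss: {initial_loss:.3f} -> {final_loss:.3f}")
```

After training, the error drops sharply, yet nothing in the code names the pattern being learned. The "black box" remark in the transcript is the same observation at scale: the learned weights work, but inspecting them does not readily reveal what each one represents.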