and so this is the question: do we follow alan turing and say the loss of control is inevitable? i don't think we have to do that. i think what we need to do is understand where the source of the risk is. this is not as if a superior alien civilisation just lands on earth, more intelligent and capable than we are, and we're toast. this is something we are designing, so where does the problem come from? what is it about the way we design ai systems that leads to a conflict in which we lose control?

do you think we understand where that point is and how it works? because i'm just thinking right now, and let's get to the nitty-gritty of what is happening in ai, we have ai being developed to the tune of tens and hundreds of billions of dollars across the world, both by corporate actors, the big tech companies at the forefront that we can all name, and by states as well, whether it be the us defence department or the chinese and russian and other governments doing it at a state level. do you think those various actors understand precisely the dilemma that you