You know, I think, speaking from the perspective of BSA, the Business Software Alliance, we represent companies, and we recognize that AI presents opportunities, but we also need to promote trust and responsibility. BSA members have long recognized that; they've committed to it, and that's why we welcome steps toward an effective regulatory framework.

If you look at the framework that has been introduced by the White House, it includes this commitment on AI safety, quote: "The companies commit to internal and external security testing of their AI systems before their release. This testing, which will be carried out in part by independent experts, guards against some of the most significant sources of AI risks, such as biosecurity and cybersecurity, as well as its broader societal effects." So from your perspective at BSA, what would be effective in making sure that these systems are tested externally? I mean, who should be testing these systems, and how do we make sure, of course, that the companies also adopt the recommendations?

I think that the announcement the administration has made is a