The founder and head of NVIDIA, Jensen Huang, presented at the conference an expanded set of technologies for robots with built-in artificial intelligence. The general idea is to simplify this process and automate it as much as possible, particularly as concerns the primary source of training data: the robot's neural network uses the actions of a live trainer captured by the Apple Vision Pro headset. This solves two problems. Firstly, unlike the terabytes of text available for training large language models, there is very little structured data about physical interaction with the world. Secondly, NVIDIA came up with an additional justification for the existence of this mixed reality device itself, which has remarkable capabilities but which Apple itself is promoting almost exclusively as... entertainment. And so, NVIDIA invites AI trainers to look at the physical world through the eyes of a robot and perform certain test operations. All actions are captured by the sensors of the Vision Pro headset; based on this data, which is still not enough on its own, the system generates a large synthetic data set. Well, then the real one and