This sounds like they’re talking about machine learning models, not the glorified-autocorrect LLMs. In other words, the actually useful AI: the kind that can be leveraged to find real, important patterns in large sets of data that would be much harder for humans to spot.
That’s not at all what this is doing. It’s a call to make sure businesses put a priority on making these machine learning models less opaque, so you can see the inputs a model used and the connections it found at each step, and therefore understand why it gave a particular result.
You can’t debug a black box (you put an input in and get an unexplained output back) remotely as easily, if at all.
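To make that concrete, here's a rough sketch using scikit-learn (my choice of toolkit for illustration, not something the original specifies). A decision tree is one of the classically interpretable models: you can dump every split it learned and trace the exact path a single input takes to its prediction, which is exactly the "see why a result was given" property that a black box lacks.

```python
# Minimal sketch of model transparency (assumes scikit-learn is installed).
# With an interpretable model you can trace which inputs drove a prediction.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# A small, well-known dataset stands in for the "large sets of data" here.
iris = load_iris()
X, y = iris.data, iris.target

model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X, y)

# Every split the model learned is inspectable: feature, threshold, outcome.
print(export_text(model, feature_names=iris.feature_names))

# For one sample, show the exact chain of decisions behind the result.
sample = X[[0]]
path = model.decision_path(sample)
print("Nodes visited for sample 0:", path.indices.tolist())
print("Prediction:", iris.target_names[model.predict(sample)[0]])
```

A deep neural net gives you no equivalent of that `export_text` dump; that gap is what the push for transparency is about.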