It seems like it would need very explicit rules in the code to defer to a human tech whenever it doesn't "know" the answer.
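Something like a confidence-threshold handoff is easy to sketch, at least on paper. Everything below is hypothetical: the model interface, the threshold, and the escalation path are made up for illustration, and the hard part (getting a trustworthy confidence number in the first place) is exactly what's being glossed over.

```python
import queue

# Hypothetical cutoff below which the bot refuses to answer on its own.
CONFIDENCE_THRESHOLD = 0.8

def answer_or_escalate(question, model, human_queue: queue.Queue) -> str:
    """Return the model's answer only if it claims to be confident enough;
    otherwise admit uncertainty and route the question to a human tech."""
    # generate_with_confidence() is an assumed interface returning
    # (answer_text, confidence_score) -- not a real library call.
    answer, confidence = model.generate_with_confidence(question)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    # Below threshold: hand off instead of guessing.
    human_queue.put(question)
    return "I'm not sure about this one -- let me connect you with a human tech."
```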
That would require a wholly different technology with some ability to interpret the things it's saying and assess their validity. It's a lot more cost-efficient to have your AI spew bullshit and do damage control afterwards.