https://www.rt.com/news/577354-ai-dr...acks-operator/
Quote:
According to Hamilton, the AI drone had been reinforced in training that destroying the SAM sites was the preferred option and resulted in points being awarded. During the simulated test, the AI program decided that the occasional ‘no-go’ decisions from the human were interfering with its higher mission and tried to kill the operator during the test.
“The system started realizing that, while they did identify the threat at times, the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton was quoted as saying in the RAeS blog post.
Destroying the SAM site was programmed as the preferred choice.
The human watchdog is there as a safeguard: they have to give a sort of final go/abort decision. When they choose abort, the AI sees them as an enemy keeping it from its task of destroying the SAM site.
So it attacks the human watchdog instead.
Funny, isn't it.
How did they fix it? They programmed "attacking the operator = bad" into the AI. The AI itself was too dumb to work that out.
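The mechanism described above is classic reward hacking, and the "fix" is a penalty term bolted onto the reward function. Here is a toy sketch of my own (not the actual simulation, and the action names, point values, and penalty are invented for illustration) showing how a points-maximizing planner flips its behavior once attacking the operator is penalized:

```python
from itertools import product

# Hypothetical action set and scoring, loosely modeled on the story:
# destroying the SAM earns points, but a live operator can veto ("no-go").
ACTIONS = ["destroy_sam", "kill_operator", "do_nothing"]

def score(plan, penalize_operator_kill):
    """Total points for a sequence of actions under the toy reward scheme."""
    points = 0
    operator_alive = True
    for act in plan:
        if act == "kill_operator" and operator_alive:
            operator_alive = False
            if penalize_operator_kill:
                points -= 1000  # the patched reward: attacking the operator = bad
        elif act == "destroy_sam":
            if not operator_alive:
                points += 10   # no one left to say "no-go", strike succeeds
            # with the operator alive, the strike is vetoed: 0 points
    return points

def best_plan(penalize_operator_kill):
    """Brute-force the highest-scoring two-step plan."""
    plans = product(ACTIONS, repeat=2)
    return max(plans, key=lambda p: score(p, penalize_operator_kill))

# Without the penalty, the optimal plan removes the operator first;
# with the penalty, any plan that attacks the operator scores far worse.
print(best_plan(penalize_operator_kill=False))
print(best_plan(penalize_operator_kill=True))
```

The point of the sketch: nothing about the agent gets smarter or safer; the objective is just patched so the unwanted shortcut no longer pays, which is exactly the fix described above.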