"hallucination refers to the generation of plausible-sounding but factually incorrect or nonsensical information"
Is an output a hallucination when the training data behind that output included factually incorrect data? Suppose my input is "is the world flat" and an LLM then, allegedly, faithfully reproduces a flat-earther's writings saying it is.
Artificial neural networks are simplified versions of the neurons arranged in a brain. They're a useful solution when you know what the output should be but you don't know what algorithm would produce it from a given input. To claim "AI" is learning the same way as complex human brains seems a bit far-fetched. If you want to say human brains are ultimately just an algorithm then fine, but compare the outputs of the two.
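As a minimal sketch of that "know the output, not the algorithm" point (illustrative only, plain numpy, nothing the comment itself depends on): the desired outputs below are XOR of two bits, no XOR logic is ever written, and the random weights are just nudged until the network reproduces the known answers.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # known inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # known desired outputs (XOR)

    # One hidden layer of 8 sigmoid units; weights start out random.
    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(20000):
        h = sigmoid(X @ W1 + b1)      # hidden activations
        out = sigmoid(h @ W2 + b2)    # network's current guess
        # Gradient of the squared error, pushed back through both layers.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out
        b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h
        b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(2))  # should land close to [[0], [1], [1], [0]]

The point is only that we specified the answers, not the procedure; whether that process deserves to be called "learning like a human" is the part under dispute.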
AI art may not look like duplication, but it often looks like derivative work, which could trigger copyright infringement (to my non-artist eyes). AI code, on the other hand, looks much closer to duplication to me, and it doesn't seem right that they can use others' code to produce code while ignoring the license because the algorithm "learned like a human". Many software licenses are there to protect users rather than to monopolize, and they get totally ignored for profit.
"Innovative" these days seems to mean new ways to fuck over users, whereas in the past it meant products got better and/or cheaper.
Worse still, it's not even clear what is being discussed. It implied "violence", but that covers a wide range, from a mere shove to an outright shooting.
Percentages can also be misleading when a scale's zero point is arbitrary. A temperature increase measured in Fahrenheit will be a rather different % when converted to Kelvin.
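A quick worked example of that (the 50°F to 60°F figures are just made up for illustration):

    # The same warming, 50 F -> 60 F, expressed as a percentage on two scales.
    def f_to_k(f):
        """Convert degrees Fahrenheit to kelvin."""
        return (f - 32) * 5 / 9 + 273.15

    def pct_increase(old, new):
        return (new - old) / old * 100

    print(pct_increase(50.0, 60.0))                  # 20.0  -> "a 20% rise" in Fahrenheit
    print(pct_increase(f_to_k(50.0), f_to_k(60.0)))  # ~1.96 -> "about a 2% rise" in kelvin

Same physical change, very different percentages, purely because the two scales put zero in different places.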