Ever since late last year, after the release of ChatGPT, I've seen countless posts about 'detecting AI'. I have watched, and been quiet... after 30+ years knee deep in every cutting-edge technological development related to content and image production, I have at least an informed opinion, and here it is: you will not be capable of detecting AI-generated content. The idea of such a capability was folly from the beginning, but catchy enough to be a good way to make money selling 'detection tools'. Humans are obsessed with being able to distinguish between "real" content and that which is... uh, "fake", I guess. What makes readable, logical, interesting content "fake" seems about as important as determining whether or not we live in "the matrix" (a simulation). If it seems real, it's real; that's the only test any of us need concern ourselves with. The rest is a metaphysical minefield I recommend walking around rather than through.
This is not meant to discourage, but simply to inform, and to encourage a focus on reality. One must operate from a point of truth and avoid the distraction of pursuing pointless goals. AI, guided by humans, will do a better job of covering its tracks than any method of fraud detection can ever hope to overcome. Before long, false positives will emerge at a higher rate than actual detections, and all the while the AI will learn how to better create content that is indistinguishable from that produced by people. Once AI-generated content is edited, the entire subject is moot: touched by a human, it becomes a human creation. Upon approval and publication, it is the product of the human who directed the AI. I believe this is essentially true even for automatic processes, provided they were originally established by a human being.
So, don't worry about whether something was AI-generated or not. Resist the instinct to ad hominem the robot. Focus on the message, not on how it was created. Surf this wave; do not fight the ocean.