adamcotton wrote: ↑Wed Apr 24, 2024 9:34 pm I found this post strange, but I'm not sure it's AI ... wouldn't AI know that species names are spelt with a first small letter, and only genus names with a capital?
Adam.

AI is available in various packages (programs), and each has owner-adjustable settings for correctness, tone, writing level, and humor. In general, though, all of the commercially available ones (e.g. ChatGPT) mine the internet for information but don't go deeper into publications, etc. So if the answer isn't commonly available (e.g. on Wikipedia), or isn't "conventional wisdom", the answer will be wrong.
Further, there are adjustable settings for things like "make stuff up"; a NASA AI engineer told me that's intentional, because the AI chat is trying to get human users to correct it, and it then stores that correction.
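For what it's worth, one real knob that roughly matches this "make stuff up" dial is the sampling temperature used by language models: raising it flattens the next-word probability distribution, so unlikely (and often wrong) continuations get picked more often. This is only an illustrative sketch of the math, not any vendor's actual setting:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw model scores (logits) into probabilities.

    temperature > 1 flattens the distribution (more surprising picks);
    temperature < 1 sharpens it toward the top-scoring option.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # made-up scores for three candidate words
sharp = softmax_with_temperature(logits, 0.5)  # top word dominates
flat = softmax_with_temperature(logits, 2.0)   # probabilities spread out
print(sharp, flat)
```

With low temperature the highest-scoring word wins almost every time; with high temperature the model wanders, which is where confident-sounding nonsense comes from.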
What AI is fascinatingly adept at is generating well-written content in natural English; on request, this can be adjusted for competence (e.g., school grade level, and US vs. UK vocabulary).
Typically, the give-away on AI-generated content is a mix of in-depth knowledge with perplexing stupidity. That's what we have here: names of species, chemicals, etc., but a total lack of common sense. AI itself, asked to write a simple story, won't come up with this content; somebody asked it to write an absolutely outrageous story, which is what was posted. The problem is that AI doesn't know enough to avoid dropping in species names and chemical details. So the hand is tipped.
There are other give-aways: the settings are the west coast and NYC, which are well known and home to most AI programmers and owners. If it had picked, for example, Coeur d'Alene, Idaho as the source location, the story would fall apart, because virtually everyone from that area (1) has the common sense to know this is impossible, and (2) doesn't have the money to throw clothing away every three days. Not that it avoided Coeur d'Alene to make the story more believable; again, it picked, or was told, "west coast."
The spelling, grammar, and punctuation I've not experimented with. Usually, AI will generate these to perfection, so I'm unsure whether a user can tell the chatbot to intentionally screw up, or whether these were later hand-edited. "0.5% Permethrin" and later "permathrins" is not, AFAIK, AI (yet). This indicates to me that the content was generated (by AI upon request) not by a bot mining data, but by a human with some other motive.
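Inconsistencies like "Permethrin" vs. "permethrin" (and Adam's point about species names getting the wrong capitalization) are exactly the kind of surface tell you can check mechanically. A minimal sketch, purely my own illustration, that flags words appearing under more than one capitalization in a text:

```python
import re
from collections import defaultdict

def capitalization_variants(text):
    """Group word forms case-insensitively and report any word that
    appears under more than one distinct capitalization."""
    forms = defaultdict(set)
    for word in re.findall(r"[A-Za-z]+", text):
        forms[word.lower()].add(word)
    return {k: sorted(v) for k, v in forms.items() if len(v) > 1}

sample = "We applied 0.5% Permethrin. The permethrin worked."
print(capitalization_variants(sample))
# → {'permethrin': ['Permethrin', 'permethrin']}
```

It deliberately ignores sentence-initial words like "The" only when they never recur in another form, so it's crude; catching outright misspellings like "permathrins" would need fuzzy matching on top. But it shows how trivially a careful reader, or a script, can spot hand-editing.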
Another clue to AI generation is the tone. It tries to make the story believable, which is actually VERY difficult. Remember the US newspapers' "Dear Abby" column, in which dingbat writers would ask for advice about challenges that pretty much everyone knew how to handle? One particular college (I forget which) was adept at getting spoof questions published, but IIRC fewer than 20 of these made it through over a period spanning decades. AI, on the other hand, is effectively a master of manipulation; in many ways it can out-think the humans appointed to screen it. This is scary, because AI is just a baby.
In some cases, as we may yet see here, AI will return to "defend" itself with well-written excuses and more story. Right now, though, that is rare. Besides which, as I said, while this is AI-generated, there's a human involved. That in itself is somewhat of a blessing, as is the fact that the whole thing is a spoof, because if this were truly human-generated, based on a perceived series of events, we would have to keep in mind that this person drives and votes.
You can expect AI to be mature enough within five years that it will generate content on detailed subjects for which humans will not be able to ascertain who, or what, wrote it. Like any tool, it will be used for entertainment, politics, and crime. What a shame.