Asking AI the wrong questions

Menlo Ventures recently surveyed 600-plus enterprises to gauge AI adoption. Perhaps unsurprisingly, software development tops the list of use cases, with 51% adoption across those surveyed. This makes sense because ChatGPT and other tools offer fast-track access to developer documentation, as Gergely Orosz found. Developers have gone from asking questions on Stack Overflow to finding those same answers through GitHub Copilot and other tools. GenAI may not be as good a fit for other enterprise tasks, however.

This is because ultimately genAI isn’t really about machines. It’s about people and, specifically, the people who label data. Andrej Karpathy, part of OpenAI’s founding team and previously director of AI at Tesla, notes that when you prompt an LLM with a question, “You’re not asking some magical AI. You’re asking a human data labeler,” one “whose average essence was lossily distilled into statistical token tumblers that are LLMs.” The machines are good at combing through lots of data to surface answers, but that’s perhaps just a more sophisticated spin on a search engine.

That might be exactly what you need, but it also might not be. Rather than defaulting to “the answer is genAI” regardless of the question, we’d do well to better tune how and when we use genAI. Again, software development is a good use of genAI right now. Having ChatGPT write your thought leadership piece on LinkedIn, however, might not be. (A recent analysis found that 54% of LinkedIn “thought leadership” posts are AI-generated. If it’s not worth your time to write it, it’s not worth my time to read it.) The hype will fade, as I’ve written, leaving us with a few key areas in which artificial intelligence or genAI can absolutely help. The trick is not to get sucked into that hype, but instead to focus on finding significant gains through the technology.
