We examine the phenomenon of LLM sensitivity to prompting choices through two core linguistic tasks and categorize how specific prompting choices can affect the model's behavior.
This is the thing about LLMs, and AI/ML in general, that I think people need to understand (especially corporate higher-ups throwing money at AI to cut labor costs): it doesn’t learn anything. It doesn’t generate anything novel. It finds patterns in its training data and repeats those patterns in, ultimately, pseudorandom ways, which leads to nondeterministic output that makes the results look more impressive than they actually are.
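Here's a toy sketch of what that pseudorandom repetition looks like at the sampling step. The token distribution is invented for illustration, and real decoders sample over softmax logits rather than a hand-written dict, but the mechanism is the same kind of weighted pseudorandom draw:

```python
import random

# Invented next-token distribution a model might have learned for
# some context; the probabilities here are made up for illustration.
next_token_probs = {"blue": 0.70, "clear": 0.20, "falling": 0.10}

def sample_token(probs, temperature=1.0):
    # Temperature reshapes the learned distribution, but the "choice"
    # is still a pseudorandom draw over patterns seen in training.
    weights = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    total = sum(weights.values())
    r = random.random() * total
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # fallback for floating-point edge cases

# Two runs over the same "prompt" can differ: the nondeterminism comes
# from the sampler, not from the model deciding anything new.
print([sample_token(next_token_probs) for _ in range(5)])
print([sample_token(next_token_probs) for _ in range(5)])
```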
We’ll see tremendous stagnation as AI adoption increases, because you simply can’t feed models enough fresh data when so much of what’s available already came from AI to begin with. The entropy among the patterns decreases with each round of retraining.
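You can watch that entropy loss in a deliberately simplified simulation: a categorical distribution over "patterns" is re-estimated each generation from a finite sample of its own output. This is a toy stand-in for retraining on AI-generated data, not a real training pipeline, but it shows the drift: rare patterns fall out of the sample and never come back.

```python
import math
import random
from collections import Counter

def entropy(probs):
    # Shannon entropy in bits of a {category: probability} dict.
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

# Start from a diverse "data" distribution over 20 patterns.
dist = {i: 1 / 20 for i in range(20)}
n_samples = 200  # each generation "trains" on a finite sample of the last

random.seed(0)
for gen in range(10):
    # Draw a finite training set from the current distribution, then
    # re-estimate. Patterns missing from the sample are gone for good.
    draws = random.choices(list(dist), weights=dist.values(), k=n_samples)
    counts = Counter(draws)
    dist = {k: v / n_samples for k, v in counts.items()}
    print(f"gen {gen}: {len(dist)} patterns, entropy {entropy(dist):.2f} bits")
```

Run it and both the pattern count and the entropy shrink generation over generation, collapsing toward a handful of dominant patterns.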