With AI, common sense is uncommon (part 3 of 3)

Common sense is surprisingly uncommon, particularly in artificial intelligence. Machines often falter at nuanced distinctions that humans grasp intuitively. That is precisely why websites verify your humanity before granting access or processing a transaction: most bots cannot tell a crosswalk from a zebra.

On one hand, we have AI systems capable of remarkable achievements; on the other, they often stumble over trivial errors. AI development typically focuses on building specialized agents for individual tasks. Commonsense research takes a broader aim: establishing comprehensive repositories of everyday knowledge so that AI agents can perform well across a wide spectrum of tasks. Researchers bolster commonsense reasoning through expert curation, crowdsourcing, and mining vast amounts of text, among other methods. These knowledge sources prove especially valuable when information is incomplete: by folding everyday assumptions into their reasoning, AI agents can make informed inferences in both familiar and unforeseen scenarios.
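
To make the idea of reasoning with everyday assumptions a little more concrete, here is a minimal sketch in Python. The facts, defaults, and function names are invented for illustration; real commonsense knowledge bases are vastly larger and richer, but the principle is the same: when no explicit fact is known, fall back on a sensible default, and let explicit knowledge override it.

```python
# A minimal sketch of default (commonsense) reasoning, with invented facts.
# Explicit knowledge wins; otherwise fall back on a category-level default.

COMMONSENSE_DEFAULTS = {
    "bird": {"can_fly": True},
}

EXPLICIT_FACTS = {
    "sparrow": {"is_a": "bird"},
    "penguin": {"is_a": "bird", "can_fly": False},  # known exception overrides the default
}

def infer(entity, attribute):
    """Return an explicit fact if one exists, otherwise a commonsense default, else None."""
    facts = EXPLICIT_FACTS.get(entity, {})
    if attribute in facts:
        return facts[attribute]
    category = facts.get("is_a")
    defaults = COMMONSENSE_DEFAULTS.get(category, {})
    return defaults.get(attribute)  # None when there is no information either way

print(infer("sparrow", "can_fly"))  # True  -- assumed from the default for birds
print(infer("penguin", "can_fly"))  # False -- the explicit exception wins
print(infer("toaster", "can_fly"))  # None  -- no commonsense entry, no assumption made
```

The point of the sketch is the behavior, not the code: an agent equipped with defaults can commit to a reasonable answer in an unfamiliar situation, yet still defer to specific knowledge when it has it.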

With a robust knowledge base, AI can generate new ideas and solutions through computational thinking and creativity. Significant gaps remain, however. Simple tasks, such as judging whether two things are the same or different, still trip up AI systems. There is a clear divide between human intuition and AI's capabilities; interpreting emotions, for instance, remains a hard case. Understanding human motivation is a principal challenge for commonsense AI. Emotions are inescapable when dealing with human behavior, yet much of what shapes an emotional response stays hidden, which makes this an especially complex problem.