I would say that tracing concepts back to their origins could be a good benchmark for evaluating AI models.
In the context of the article you mentioned, Robin Dunbar’s research and the Max4 Principle are referenced [1], along with a relevant Wikipedia article [2]. Expanding further, one can trace earlier foundational works, such as “The Primary Group as Cooley Defines It” [3], and even earlier sociological contributions by thinkers like Auguste Comte.
As for the “Serve it Forth” answer, I haven’t come across that yet, except, obviously, when I named it explicitly.
Thinking out loud here... that could be like an LLM tokenized on ideas rather than on words/fragments, no?
I'm sure someone has worked on this, but how would a computer extract an idea from a bunch of words (or sentences)?
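One crude baseline (just a sketch I'm making up here, not any established system): split text into sentences, turn each into a bag-of-words count vector, and treat high cosine similarity between vectors as a hint that two sentences express a related "idea". Real systems would use learned sentence embeddings instead of raw word counts, but the shape of the problem is the same. The example text and all names below are invented for illustration.

```python
import re
from collections import Counter
from math import sqrt

def sentence_vectors(text):
    """Split text into rough sentences and build bag-of-words count vectors."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    return sentences, [Counter(re.findall(r"[a-z']+", s.lower())) for s in sentences]

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (Counters)."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

text = ("Small groups of four people work best. "
        "Teams of four people cooperate well. "
        "The weather was pleasant yesterday.")
sentences, vecs = sentence_vectors(text)

# Sentences sharing vocabulary score higher, hinting at a shared idea;
# the unrelated weather sentence scores near zero against the others.
for i in range(len(vecs)):
    for j in range(i + 1, len(vecs)):
        print(f"{sentences[i]!r} vs {sentences[j]!r}: "
              f"{cosine(vecs[i], vecs[j]):.2f}")
```

This obviously only catches surface word overlap, which is exactly why it feels unsatisfying as "idea extraction"; two sentences can share an idea with zero shared words, and that gap is what embedding models try to close.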
Looks like another rabbit hole to add to the list...