Artificial Intelligence as Interface
Herbert Simon saw the artificial as an interface between internal and external environments. Today, generative AI is proving him right — as a fantastic intermediary between humans and the digital world.
The first artificial intelligence program was created by Herbert Simon in the 1950s; some 20 years later he won the Nobel Prize in... economics!
In his book "The Sciences of the Artificial," Simon writes:
"The artificial can be regarded as a meeting point between an 'internal environment' and an 'external environment,' that is, an interface in today's terms."
Considering that the software he wrote with others in 1956, the Logic Theorist, proved theorems in logic, this may seem an odd way to describe the artificial. Yet today, more than ever, it is clear how right Simon was.
The same objection seems to apply to generative artificial intelligence, the technology behind ChatGPT and company. It is so smart! An interface? No, it understands!
Yes, it behaves intelligently, but so does a car that accelerates and brakes on its own, and we do not cry miracle over that, or proclaim the arrival of the sapiens-machine singularity.
The intelligence we see in AI today is not really its own; it is like a puppet displaying the cunning of whoever pulls the strings. Those hundreds of millions of dollars OpenAI spent on GPT-4? Much of that money paid a great many people to train it to behave intelligently. At bottom, GPT-4 is simply very good at playing with words, but it does not really understand them.
This brings us to Simon and the artificial seen as an interface between the world inside the artificial itself and our own. Think of generative AI as a fantastic contraption that sits between us and our technological tools. If it is used well, it will do an outstanding job in helping us.
Generative artificial intelligence, in our view, will take off as an intermediary between Sapiens and the digital world: put somewhat technically, it will translate requests made in natural language into programmatic form. For conventional software, understanding "I'd like to come on November 9 at 10:15" is not trivial. For a language model like GPT, on the contrary, it is easy, and just as easy to translate into a "computer" format: it asks for the year (which it does not know) and generates 2024-11-09T10:15:00+02, which means the same thing but which any developer can easily use in their own code. This is the interface!
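To make the division of labor concrete, here is a minimal Python sketch of the hand-off. It assumes the language model has already done the "interface" work and replied with the ISO 8601 string from the example above (the `llm_reply` variable stands in for a real model call, which is not shown); from that point on, ordinary deterministic code takes over.

```python
from datetime import datetime

# Hypothetical reply from a language model that was asked to convert
# "I'd like to come on November 9 at 10:15" into ISO 8601 format,
# after being told the current year (2024) and time zone (+02:00).
llm_reply = "2024-11-09T10:15:00+02:00"

# The model's job ends here. Plain software handles the rest:
# parsing, validation, and whatever booking logic comes next.
appointment = datetime.fromisoformat(llm_reply)

print(appointment.date())   # the calendar day of the request
print(appointment.time())   # the requested time of day
```

The point of the sketch is the boundary: the model turns messy human language into a machine-readable string, and everything after `fromisoformat` is the reliable, testable software developers have always written.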
This is how we use GPT-type models for now: we take them for what they do best, interpreting language. The result? Useful software that does not get lost in talk but tries to solve the problem as well as it can.