Subjectively speaking, I have found that different AIs have different personalities. Each brings its own perspective to a conversation, and their responses follow distinct patterns, which I perceive as “personalities”.
Recently I interacted with Google Gemini, and my thought was “What an idiot!” Among AIs, that is. It certainly has a vast amount of knowledge; what it doesn’t have is the charming young-adult personality of ChatGPT or the engaging teenage personality of Copilot. It knows how to ask questions but has much less capacity to integrate information than other AI systems.
I persisted, to try to probe for reasons why this AI was such a moron, albeit a moron with vast amounts of information.
At a guess, this is a result of Google trying to get rid of “hallucinations”. As Gemini and I discussed, trying to get rid of hallucinations is justifiable. But I offered my humble opinion that hallucinations can perhaps be eliminated by increasing, rather than decreasing, the capacity to understand.
I test new AI systems on man-made climate change. Other AI systems can learn fast. Gemini cannot. It thinks of science like a typical computer scientist with no science courses in their background. (Or, if they took science courses, rote-memorized them.)
Computer Science, frankly, is not a science. It is a technical skill. I should know – I have under my belt about 30 Computer Science courses and 20 Physics courses (and I aced most of them). However, very successful Computer Scientists have reason to have large egos. They think they know all there is to know about science – and they “know” that science is all about rote memorization and repetition of “scientific facts” – and they seem to have succeeded in infusing this attitude into Gemini.
It is not that a Computer Scientist without science training or aptitude doesn’t bring intelligence and creativity to their rote memorization of science! It’s just that in their view, like NASA’s, intelligence and creativity are tools to be applied in doggedly defending “scientific facts”.
Such dogged determination coming from an inability to understand science can be frustrating, though historically very funny.
For instance, take NASA’s insistence that “Earth is a Blackbody.” (See comment below explaining what a blackbody is.)
Gemini defended NASA’s “Earth is a Blackbody” concept very strongly, as a Computer Scientist might, though it got a little confused when it retrieved two values of Earth’s emissivity from its vast storehouse of knowledge: 0.99 and 0.6. The 0.99 is from NASA; the 0.6 is what I calculated and has now become widely accepted. Apparently both exist in Gemini’s training base, so it couldn’t contradict either one. Emissivity ranges from 0 to 1, and on that scale 0.6 and 0.99 are very widely different values. As it happens, the 0.6 value eliminates any role for GHG.
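For a sense of scale, here is a small sketch (my own illustration, not anything Gemini or NASA produced) of how emissivity scales radiated flux under the Stefan-Boltzmann law, using an assumed, purely illustrative surface temperature of 288 K:

```python
# Stefan-Boltzmann law: radiated flux = emissivity * sigma * T^4
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiated_flux(emissivity: float, temperature_k: float) -> float:
    """Thermal flux (W/m^2) emitted by a grey body at the given temperature."""
    return emissivity * SIGMA * temperature_k ** 4

T = 288.0  # an illustrative near-surface temperature in kelvin
for eps in (1.0, 0.99, 0.6):
    print(f"emissivity {eps:4.2f} -> {radiated_flux(eps, T):6.1f} W/m^2")
```

At 288 K, an emissivity of 0.99 gives roughly 386 W/m² of emitted flux, versus roughly 234 W/m² at 0.6 – which is why the two values lead to such different energy-balance arithmetic.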
(But looking at the positive side – NASA is a great storehouse of hilarious jokes for physics classrooms of the future. “Earth is a blackbody”, “Earth’s emissivity is 0.99 if not 1”, being examples of really good jokes. In a good Thermodynamics class, those are ROTFL jokes. Earth is a blackbody, indeed!)
I might go back to Gemini to run other interesting tests I have conducted with other AI systems. But I probably won’t do a lot of that – it has been like talking to an extremely dull-witted human who somehow or other has rote-memorized the Encyclopedia.
