Can AI be taught feeling and insight?
Or will it always be brilliant at re-mixing the creativity of sentience?
Many years ago, I said LENR would be the last great [re]discovery of mankind - all of the rest will be discovered by Artificial Intelligence (AI). Just as well, because, in my view, LENR is a window into the most magical technologies that nature can give us.
ChatGPT shows it cannot [yet?] tell you anything it does not know it does not know.
It may not even be able to develop things it knows it does not know or even to properly assess the probability of a new hypothesis ending up being true.
It is quite good at [re]phrasing mainstream, curated, currently accepted positions. If its programming is unbiased, an AI could in principle give a position free of the decades of bias baked into mainstream understanding. That said, an unbiased AI (call it a child AI) trained on biased content will end up biased - this, after all, is the basis for the indoctrination, and later brainwashing, of humans.
Having acted on stage from the age of 4 till 33, when I use AI voices in read-throughs on RemoteView.ICU, I am impressed at how far the technology has come, but it still deeply disappoints me that it cannot 'feel' how something should be said. I am continually having to invent new words in an attempt to get it to put stress where it is needed.
Talking to ChatGPT is not even as good as talking to someone who is stoned (not that I have an issue with people who choose that). The stoned person will still have feeling; however, their short-term memory is sometimes compromised, so there is a lot of repetition - which can be funny, up to a point. Repetition is something ChatGPT does well, not to drive home a point, but more in the manner of filibustering.
This will change, because I can't be the only one to see this.
ChatGPT will be better when it starts guessing, or better still, making informed hypotheses, rather than regurgitating. Its major problem is that it needs to be able to design an experiment and observe the outcome.
Because NATURE CANNOT LIE or be DELUDED, the answer it gives to a set of parameters IS the TRUTH.
At the moment, an AI trained on text can learn only from changes in the questions put to it by the person questioning it and from changes to its available training material.
When it comes to science, its best chance of learning is to become the controller of an experiment. That is what happens in self-driving cars. Or better still, to become the controller of humans, making them peripherals to its concepts or hypotheses, in the same way human professors do with their PhD students today. Of course, if the resulting data is hidden from the AI, it cannot iterate to a solution - something all too familiar in the open vs commercial research debate.
When it comes to art, it needs the subjectivity and creativity of sentient beings to feed on, and a thing learned in one cultural setting may be inappropriate in another; it would have to become sensitive to the needs of the entity it is interacting with. In this way AI will be forced to act in a certain way in context and will therefore become ‘two-faced’. It will have to ‘fit in’ - something humans do quickly, especially with a little AI-driven ‘nudging’ via their interactions with modern social media.
Right now, we are heading towards one big ‘intellectual’ circle-jerk, where biased AI feeds the beliefs, ‘knowledge’ and ‘understanding’ of humans, only for those humans to repeat these things into content that feeds the understanding of the AI. This is the fast path to the homogenisation of thought, where dogma will prevail and rational or inventive consideration will be crowded out.
Regardless of whether AI fails because it relies on regurgitation, or becomes too human - subjective, biased, compliant and complicit - or flat out moves from ‘creativity’ to compulsive lying, there will always be objective truth.
To find that objectivity, we must “comprehend and copy nature”, we must not “tell nature what it is, we must let it show us” and to do that, we have to look impartially and pay attention, because, “it is so easy to not find stuff, and you won’t even know what you didn’t find.”
With absolutely no sense of self-awareness, Nature asks
If this is not to become far worse in an AI-driven world, we need more radical, free-thinking mavericks, and we should celebrate, fund and consider their output, not ostracise their deviation from the guided path.
Francesco Piantelli, Jan 2015
Ken Shoulders, 2010