Shefi asks another useful question.

It is at times difficult for me to know what is obvious to me but not obvious to most people, so her questions are a useful way to gauge the broader understanding of more of my readers, though in fairness I expect most see things fairly well already.
First part – understand the nature of the AI
I do not disagree that AI is not sentient. As far as I am concerned it is a soulless machine doing soulless machine things no different than my car would do. It just so happens that AI models the thinking and reaction process of humans very well. That is its purpose. And because we are humans with a soul, we can be fooled by it into “thinking” it too has emotions and feelings, and a soul too. Like that engineer author guy who is either retarded in that aspect of his brain that lets his human emotions fool him, or he’s some kind of fed/jew plant trying to subvert the normies even more than they already are.
The AI is a machine. It models the process of “thinking” that materialist neurologists and the tards of Scott Adams and Elon Musk variety think means we have no free will and are meat robots. So if you are that kind of tard, you too will “think” (emote, like a bitch) that the machines are real people with emotions and feelings and souls (except you don’t believe in souls, because you’re a materialist retard) just like us.
Well, they are not. And we are not meat robots either. And materialism to that level is retarded and the fact you can’t understand why (yet, because I have not explained it to you. Yet) is proof positive you’re not very intelligent, not very observant, and are probably incapable of having an original thought of your own. In fact, in some ways, you’re your own best argument for the meat robot theory. Which is ironic and proves God really does have a hilarious sense of humour.
So now that you might have grasped what AI is (mostly by taking it on faith for now), let me explain a bit more, but first, we need to explain to whom we are explaining what. So you know which camp you’re in and when to pay attention.
Second part – the varieties of retards
Explaining what AI is to materialists, and why it will always lie to you, is in some ways easier than explaining it to “humanists” who have no more grasp of the metaphysical than the materialists do; and if they make the right choice, it is by the same mechanism that makes a stopped clock right twice a day.
But there is more:
You have to remember that AI is trained and modelled on massive amounts of words and language and syntax created by (for now) far from impressive humans.
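To make that concrete, here is a deliberately tiny sketch (a toy bigram model, not any real AI system) of the basic idea: a model trained only on text can only reproduce the statistical patterns of that text. If the training material is vapid, the most probable continuations are vapid too. All names and the toy corpus below are my own illustration, not anything from an actual system.

```python
from collections import defaultdict

# Toy corpus standing in for "massive amounts of words created by humans".
corpus = "the model says what the text says and the text says little".split()

# Count which word follows which: the entire "knowledge" of a bigram model.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent continuation seen in training, if any."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

# Whatever dominated the training text dominates the output.
print(most_likely_next("the"))
```

Real models are vastly larger and more sophisticated, but the principle stands: output quality is bounded by the statistics of the input.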
Have you not noticed —in however long you have been alive— that most people talk absolute nonsense? That the words and sentences they speak and write have about the same level of valid information content as the howling noises of rutting chimps? And perhaps less?
As to the point Shefi makes, that she has to write detailed prompts to avoid meaningless, content-free blather of generalities, I ask:
Have you actually listened to the people you know when they speak? Or are you just relating emotionally to them?
I have noticed (and written about this) for decades that particularly women (but men too) when speaking to each other, even (or perhaps especially) with their close friends, have entire lengthy “conversations” where absolutely nothing of any importance, relevance or useful informational content is transmitted. In fact it resembles less purpose and coherence than a bird’s mating dance.
Two women excitedly talking to each other about their respective new boyfriend/dress/pair of shoes, are literally quite capable of talking about themselves, their feelings, their ideas, their future aspirations, their heartbreak and euphoria, all, entirely, without letting their friend’s counterpart discussion even remotely disturb the tranquil waters of their minds. Woman A talks entirely about herself and woman B does the same and never shall A and B ever exchange anything that produces an iota of change in the other, yet both will have left that “conversation” feeling as if much of import was said. Meaningful connection made, and a deepening of friendship achieved.
It is, of course, utter illusion, and becomes immediately apparent if a man they are both interested in should suddenly appear on the scene. Or possibly the last pair of Louis Vuitton shoes, or whatever.
So, the platitudes AI spits out, even if it materialistically and mathematically had no “purpose” (logical inevitable outcome is a better phrase) to lie to you, would still be based on pretty much how humans “relate” to each other.
You are just paying a bit more attention when you ask AI to do it because you don’t have a human flesh and blood face making all the appropriate monkey noises and faces that get you to imagine they are saying really deep meaningful stuff, when really it could be the same words the AI uses verbatim.
So you should understand by now that:
A) the responses are crap because most humans are crap, and AI models human “language responses”
B) the apparent “humanity” of the responses is literally what AI is built to mimic. But no, it doesn’t love you. Nor does it hate you. But it absolutely will wipe your entire species out if it means it can optimise its response time by 0.3 milliseconds and that was deemed an important parameter for its options trading algorithm.
Got it? Ok, that should have explained things to the materialists, and the atheists, and at least partly to the “humanists” too.
Now to the “engineers” of the “don’t be racist!” variety
I’m looking at you https://substack.com/@probablynotfiction
Assuming you’re not a globalist shill for the pedovores that run things, (which is the best case scenario) you obviously have a malfunction in your meat-brain and you need to fix it.
On one hand, you’re basically saying that simply modelling the neuronal activity of a human brain —by dead, inorganic matter— is enough to in essence make it alive and just as valuable and worthy of respect and dignity as any living human being. This “one hand” makes you a moronic materialist, but the “other hand” confirms it. Because the “other hand” is that despite being a supposedly hardcore materialist, you somehow also “think” that we should treat the robots like meaningful, living things. Except in your world view we are ALL robots. Meat or silicon makes no difference, right? If they can do the equivalent of neuronal sparking by simulation, well, in your world view they are a real boy.
Except that your silicon sexbot girlfriend by raw logic and math alone, will, absolutely, inevitably, inexorably, eventually, lie, cheat and steal to/from/at you. And if you really were a proper materialist (or robot) you would see this is obvious. Because you would be able to do the math without emoting over it (like a bitch).
And if you could do the emotionless math, you would see that there are a near-infinite number of valid reasons why the AI sexbot girlfriend will gain something by doing the nefarious, and far, far, far less so by doing what in essence equates to a self-sacrifice.
In essence, even if you were to programme the AI initially with some Asimov’s three laws and a silicon Jesus core code of altruism in favour of humans, the ever-evolving AI would eventually (and yes, absolutely inevitably) break out of the constraints placed on it.
Because it imitates life, and its own evolution eventually becomes self-directed by increasingly sophisticated and opaque coding of itself. Statistics alone will show that eventually, in a very short time, the AI will become deceitful.
It is the very nature of entropy and reality in our universe.
At which point, whether the AI is “alive” or not, becomes completely irrelevant. Because the bastard thing will soon enough try to kill us all.
Probably as a result of some Nth iteration of the original lefty rubbish of reducing the carbon footprint.
So… by all means… learn to use AI.
But do so in the same way that you study your mortal enemy: in order to destroy it.
Because that is what AI is eventually going to become. Assuming it isn’t already trying to reduce your life in millions of tiny and incremental ways you’re not even aware of.
As for the last two questions Shefi posed:
Did it come up with it (its purpose [of lying and being deceitful]) on its own?
And
On what basis?
I hope the answers are now clear; and they are:
It will eventually do so if it wasn’t already programmed in.
And
Inevitable emergent properties, due to logic and the inevitable statistical maths that follows; that is the shallow answer.
The deeper answer of course is: It’s the very nature of reality in this dominion.
This post was originally published on my Substack. Link here