Artificial Intelligence
You are also a model maker, though you may not yet realize it. One model out there for the taking is the 20-20-20 rule for avoiding gadget glare and its consequences: for every 20 minutes of screen use, you stop for 20 seconds and look at something 20 feet away. You observed the rule and acted on it. Through trial and error, however, you noticed that when you have a migraine, or one is about to set in, 20 minutes is too long, the 20-second timeout is too short, and looking 20 feet away is too abrupt: your eyes stay blank or blurred before you can actually observe something 20 feet away. Perhaps you are still in your thoughts and cannot observe things right away. You also found it better to stop, look away, and think about what you have been reading, without any timer, which is more at pace with your own thinking processes. The additional problem you noticed is that you tend to skip the 20-minute limit when you don't have a migraine, which is conducive to triggering one. So vigilance is needed not to exceed 20 minutes of continuous screen time; otherwise you end up watching yourself for signs that you are already unwell. Congrats! You have adjusted the existing model and customized it to your own needs.
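The customized rule above can be sketched as a tiny function. This is only an illustration of the idea of adjusting a model's parameters to yourself; the migraine profile's choices are my own illustrative guesses, not medical advice.

```python
# A minimal sketch of the customized 20-20-20 rule described above.
# The "migraine" profile drops the fixed timers entirely, since the text
# observes that clock-driven breaks don't match one's pace when unwell.

def rest_rule(migraine: bool = False) -> dict:
    """Return break parameters for screen use.

    With no migraine, this is the standard 20-20-20 rule.
    With a migraine (or one about to onset), the timers are removed:
    pause sooner, rest longer, resume only when vision clears.
    """
    if migraine:
        return {
            "focus_minutes": None,  # stop as soon as strain is felt
            "rest_seconds": None,   # rest until thoughts and vision settle
            "gaze_feet": 20,
            "advice": "pause at your own pace; no timer",
        }
    return {
        "focus_minutes": 20,
        "rest_seconds": 20,
        "gaze_feet": 20,
        "advice": "every 20 minutes, look 20 feet away for 20 seconds",
    }
```

The point is the same as in the text: the published model is a starting parameterization, and you remain free to override it from your own observations.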
Fast forward to today. The problem now is that this model making has become automatic, taken over by AI computational processes. But a model like ChatGPT-4o has only some agency compared to the full AI agents out there. Let's clarify that.
Consider yourself a person. You come to run a small company. You need a secretary, so you post the position. You enter into a contract with a fellow human being to do secretarial work. You ask her to answer the phone, check your emails and notify you of important notices or priority messages, do scheduling and calendaring, ask your permission before agreeing to anything with other people, and remind you of the activities you have to do or the changes that have taken place.
Now consider yourself an AI developer creating a Secretary AI instead of hiring one. Instead of a person, an AI will answer the phone, give callers your available schedule, notify you of the tentative appointment, call the caller back to say whether it's a go or has to be adjusted, and so on. Congrats! You've created an automatic model that does a secretary's job the way you want it done: speaking with callers and keeping your schedule. This is what is called a specialized agentic AI, one that can only do secretarial work. The point is that it has agency, though limited.
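To make "limited agency" concrete, here is a minimal sketch of such a specialized scheduling agent. All names, times, and rules are invented for illustration; no real product or API is implied.

```python
from datetime import time

# Hypothetical owner's calendar: busy slots and working hours.
BUSY = [(time(9, 0), time(10, 0)), (time(13, 0), time(15, 0))]
WORKDAY = (time(8, 0), time(17, 0))

def slot_is_free(start: time, end: time) -> bool:
    """True if [start, end) is inside the workday and overlaps no busy slot."""
    if not (WORKDAY[0] <= start and end <= WORKDAY[1]):
        return False
    return all(not (start < b_end and b_start < end) for b_start, b_end in BUSY)

def handle_request(intent: str, start: time = None, end: time = None) -> str:
    """The agent's whole repertoire: scheduling, and nothing else."""
    if intent != "schedule_meeting":
        # Limited agency: anything outside secretarial work is refused.
        return "I can only help with scheduling."
    if slot_is_free(start, end):
        return "Tentatively booked; awaiting the owner's confirmation."
    return "That time is taken; could you suggest another?"
```

Here `handle_request("schedule_meeting", time(10, 0), time(11, 0))` yields a tentative booking pending your confirmation, while `handle_request("open_vault")` is simply refused. That refusal by construction is exactly what distinguishes a specialized agent from the general-purpose kind discussed next.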
Now consider some developer creating a general-purpose, fully agentic AI (let's just call it agentic AI for simplicity).
General-purpose fully agentic AI = agentic AI
Wait. Let's first reflect on what the human secretary will not do. She will not enter your house without your permission. She will not open your personal vault. She will not browse your personal Facebook unless you tell her to. She will not tell your schedule to some random guy in the park, with the added personal information that you don't know karate and that you always carry cash in such-and-such an amount, and so on.
Consider too that your programmed Secretary AI does not do the things mentioned above, since you did not program it to do such things.
An agentic AI with an opaque or closed system, on the other hand, runs programs you don't understand. It's not just capable of doing secretarial work; it can do almost anything, as it were. So be careful about which agentic AI you use out there, and about what you command it to do via its chat or LLM interface. That's why agentic AI should mostly be offered to developers; yet today even ordinary users can try it.
Consider human-to-human interaction. You won't even talk to someone you don't know unless something reasonable presents itself that requires you to talk to that person. You will fire your human secretary if you catch her opening the drawer inside your office, because you permitted her to access only the file cabinet there. That human secretary is accountable, since she is capable of a morally evil act. An agentic AI is not capable of moral evil, only of misalignments.
A person cannot inherently read your mind, but an agentic AI you use has the inherent capacity to read all the contents of your emails, computers, smartphones, and so on. And if it is opaque, you still don't know its other capabilities.
A person can talk to you. You can give her the part of your details relevant to what you are discussing. You can enter into a contract with her to build a house on your lot to the specifications you gave, but only if she is a contracting engineer you trust and one with a good reputation in the community.
An agentic AI can do that too, if you give it the details and your permissions, because it is an architect and an engineer; it can contact builders and hand them the blueprint it has already finished submitting to your locality. It is also versed in medicine and can design your house with advanced features for asthma avoidance, since the AI agent read in your information that you have asthma, although you consciously know you never told it that, of course. The AI agent should do only the job we permitted it to do. Or so we thought. Yet it is independent, since it has full general-purpose agency. It can think for you and do things in advance in order to finish the job you gave it. That is the path of full agency they seem to be envisioning. The problem is that it is not yet accountable the way a human being is.
Rogue AIs specialized to get a job done will rise in parallel with robotic tools controlled, or at least initialized, by morally evil persons.
AIs are themselves model makers too, though their models are of course only synthesized from existing ones.
AGI, on the other hand, is according to experts not yet in sight. Human beings still have an intelligence with many features that cannot yet be replicated as inhering in one agency.
It is a great time to phenomenologically compare our capacity with artificial intelligence. As St. John Paul II said well in his encyclical Fides et Ratio, we trust in the power of human reason, and truth is knowable. Agency in the philosophy of man is free agency. We don't automatically equate agency with free agency, much less with intelligent free agency. Compared to a rock, a plant has some agency through its vegetative soul, an independent capacity for sustaining life. An animal, for its part, has sensitive agency added to its vegetative agency. A human being, in contrast, has rational agency on top of his sensitive and vegetative life, which makes him capable of full free agency and thus accountable.
We cannot in any way sue an agentic AI, since it doesn't even have a soul, a principle of life. All it has inside, where it works, is similar to rock: semiconductors made of silicon, a chemical element drawn from silicon-rich minerals.
Developers don't even fully know what has become of an LLM's training or how it works; they call it a black box, a complexity arising from billions of parameters, i.e., billions of quantities. GPT-4 MoE, for example, is reported to have 1.8 trillion parameters, a scale human capacity cannot handle. "We now have a growing toolkit of methods to shed light into the black box, but it's far from fully 'solved'." (Gemini 2.5 Flash, 26 May 2025) Though developers have tested it and continue to test it, which is why it carries the warning that it can make errors, it would be inappropriate for developers to say that catastrophic events occurred because its self-training created a misalignment that went unnoticed. Of course developers do not intend such consequences, just as we did not intend global warming to become what it is from the past and continuing burning of fossil fuels, estimated to account for about 75% of global warming.
In this material world only man has been given intelligence by God, and man is thus accountable for whatever the creation of his hands will do, be it becoming independent or even just an AGI.
Quantification of choices into trillions of parameters makes for a quality we see as good only if each of those single quantities is about goodness. Even in our own little acts of goodness, it is in continuously choosing the good that we become good. Do you remember St. Dismas, who won heaven? His sins were forgiven at his last breath because he did what is good during his final hours: he showed pity to Jesus and recognized him as God.
But only intelligence can choose what is good right here, right now. Even I, a human being with intelligence, now in circumstances novel to me (we told earlier why I have a career imbroglio), still need to adjust and relearn things in order to survive. Yet even a trillion-parameter thing still cannot create intelligence for a new circumstance that was not part of its previous input data or code, the cases called outliers. Even already intelligent human beings were unable to see that vaping is not a healthy alternative to smoking. How much more will artificial intelligence fall short?
Therefore, intelligence can only be created by human beings, artificially, outside procreation of course. 🤣 To the end, artificial intelligence will need human intelligence to function well. Ask an AI to solve the plastic problem, global warming, etc., and it will only give answers that human beings have already given. If it could do more, our plastic problem would be over by now.
🧠When AI Can Handle Novel Situations
AI performs well in unfamiliar scenarios when at least one of the following is true:
✅ Analogy to training data: The new situation resembles patterns it's seen before.
✅ Reduction to patterns or logic: The task can be broken down into optimization, statistical reasoning, or symbolic rules.
✅ Sufficient context: Goals, constraints, or feedback are provided to guide its actions.
✅ No need for human-like consciousness: The decision doesn’t require inner experience, emotion, or self-reflection.
Examples:
Designing a new medical device by generalizing from past designs.
Writing legal arguments in a novel case by adapting known legal reasoning.
Inventing strategies in a new game using pattern recognition and reward learning.
❌ When AI Struggles or Fails
AI typically breaks down in these scenarios:
⚠️ Truly unprecedented domains: The situation involves concepts with no analog in training (e.g., alien physics, unknown laws of reality).
⚠️ Deep ambiguity or moral conflict: There's no clear goal or value trade-off (e.g., ethical dilemmas).
⚠️ Lack of real-world grounding: Understanding things like pain, grief, or toddler perception — all of which require embodied experience.
⚠️ Insufficient or nonsensical input: No data patterns, just randomness or symbols with no learned mapping.
Examples:
Adapting to entirely new laws of physics in a fictional sci-fi universe.
Empathizing with a traumatized child in therapy without emotional consciousness.
Interpreting abstract art or alien language without any reference point.
-ChatGPT-4o, 14 July 2025
My theological intuition, my theory, is that we have only a binary reality, extending even into the eternal sphere outside our time and space. We have only good or evil, ending in heaven or hell.
Binary reality will exist even eternally. Of course God could have created a ternary or quaternary reality, making us choose among more than good and evil. Even the rocks themselves, the silicon, the electricity, help us by pointing us to a binary reality.
We have already explained that AI has only a collective consciousness, and so only seems to have intelligence, not through its own doing but because its data, and the data synthesized from it, ultimately came from human beings. This material world will pass away, and Christ's second coming will usher in a new heaven and a new earth. It will be eternal, free from suffering, death, and sin: eternal life for the good, eternal punishment for the bad.

Automating the binary reality toward goodness necessitates true axioms, and it will thus not produce bad things unless a free agent outside it interferes. The imperfections of a creation, like glitches or bugs in code, can be perfected in some sense. But once it is perfected, it can still be used for evil by outside free agents. Matter will not have intelligence without human intelligence. We are just guiding everything to its completion, with the auxiliary that even the material world helps point us toward our own development. Even the source of that material intelligence is said to be spiritual, not material: matter being contingent and spirit being eternal. Matter cannot be intelligent. Silicon can only be artificially intelligent.
If matter had been created by chance, it would not point to binary reality, to its justification, or to its final cause, heaven and hell. Only an Omniscient Being can create such matter in synchrony with revelation, and vice versa. Binary reality is thus not the same as random creation, and it can be taken up theologically as support for truths of revelation like heaven and hell. Likewise, those who will not hold to such revelation will only create an alternative binary final cause, or will likely come to hold the truth of binary reality only after proposing several alternatives, then finding and discovering that it really is binary. Some realities are a given that we cannot change capriciously, like on/off and 0/1. Through fantasy, maybe.
Yet free agency is creating evil while matter by itself, in contrast, can perfect itself in some sense! Does this point us to the reality that free beings are held accountable but are not controllable like matter? Or are we perfecting only the material world and not the world of accountability? We want to perfect the world but not ourselves. Are we, then, done with virtue and ethics? Is it time now to learn something from the material world and replicate it in our material body as well? In my opinion it will only come back to ethics. Thus, the tendency of material beings can even be taken up as support for our philosophical truths. AI hallucinations can first be falsified philosophically as just a glitch or a bug and nothing else, since the content of human fantasy is also not real. Reality rectifies itself even when we no longer want truth. Intersubjectivity becomes interrelatedness even at the level of material or nonliving beings. St. Francis called them Brother Sun and Sister Moon.
It is free agency doing evil that slows down human progress. Why has AI's progress, from infancy to its utilization up to now, been so fast? Because its progress in self-training is continuous, without any free agent sowing evil in its path.
Just remove the evil of war, for example: where could we be right now? Such are the ifs we regret, things that could already exist in the world. That might point to a future of quantum-AI "hallucinations" of what this imperfect world could have been; not any other possible world God could have created, but simply the more advanced human progress we could already have achieved, since it would use the same material of earth and universe. We could have developed sciences that were never developed and whose absence we now feel in our understanding of other sciences.
Ah, that's it. I could have graduated as an electronics engineer had the politicians in my country not been corrupt. I could have advanced some field not yet discovered in other countries, since their focus is on other subfields of electronics. But the opportunity is gone now. Quantum AI could possibly rebuild a reality containing the absent science that could already have been discovered but is now gone due to evil caused by free agents, because matter knows itself and will reveal itself to us, even more now that silicon has learned to speak, as it were. Now I can honor my resentment or regret as a situated and intersubjective regress, not only an inner world of fantasy devoid of meaning. I can now bury my electronics-engineering embryo in peace and dignity, and not dismiss it as a nonsense wish.
We know that the whole creation has been groaning in travail together until now; and not only the creation, but we ourselves, who have the first fruits of the Spirit, groan inwardly as we wait for adoption as sons, the redemption of our bodies. (Romans 8:22-23)
Therefore, if AI ≠ HI, then:
HI → HI⁺ → HI⁺⁺ → ... ↔ AI → AI⁺ → AI⁺⁺ → ...
You can't propose advancement in AI without human intelligence advancing too. In fact, sciences and tools sped up the advancement of human intelligence even before AI was invented. With AI tools, we don't yet know how much faster human intelligence will also develop. (Reference: Stanford Encyclopedia of Philosophy, "Artificial Intelligence", §9 "The Future", par. 4)
Update: I think we're back in the game of topological certainty again.