@CasperTheLich wrote:
as to what hacking actually refers to in this regard? i'm not sure, as it's not elaborated on. reaper based (and perhaps even geth based) electronic warfare tech is likely more sophisticated than what knight had to work with... even with edi, remember how quickly & quietly the reaper virus hit the normandy in me2? and that virus was aimed primarily at disabling the ship; if it had been tuned to attacking the ai, would edi have been able to stop it? that's also supposition, she very well might have been able to kill the virus if it had targeted her first, though perhaps not. and remember she's partly reaper based too, so if she could survive, that might have been a reason why.
so we just don't know. i also think it's a bit naive to compare hijacking a sentient AI to, say, getting it drunk, smoking dope, or shooting up. are you serious?
Yep, I am serious... Kind of.
While I am far from being an expert, I perceive an AI as a sentient being, and thus there are several aspects of it that I am quite sure simply *must* define it:
- a physical "body", or a blue box and by extension all the connected terminals
- memories, which are stored on hard drives or in clouds, and can possibly be shackled
- perception, which is tied to sensors and the programs responsible for interpreting stimuli
- personality, which is unique and self-developed by the AI.
So let's think for a moment about what viruses and hackers can affect, and how that would affect an AI.
- Viruses *can* potentially affect physical things, though this requires real skill and usually works only on limited types of equipment. Losing connection to external systems (like the Normandy SR-2 in ME2, perhaps) does not really impact the AI's personality, though prolonged sensory deprivation could be dangerous. Much worse would be destroying blue box clusters, as that could restrict, handicap, turn off, or even outright kill the AI's sentience. While a truly developed AI probably has some safeguards, safety drops, and back-up systems in store, the consequences of physical intrusion are dire. At the same time, this is the least probable method of intrusion, and also the most difficult to use for mind-bending / re-programming (since there is no guarantee how destroying a single cluster of wires would affect the AI).
- Memories can be affected relatively easily, and they matter a great deal for actual AI behavior. At the same time, the AI should probably be aware of that fact, and thus able to recognize when some of the "remembered" facts do not match. I do believe memories would be the easiest thing to "defend": the AI can use numerous safety drops, access tiers, and integrity checks to protect itself from memory altering. And yes, with super-human computing power the AI should be able to "deduce" the majority of hidden/shackled facts, if such had somehow been programmed into it. Which means that memory altering should usually be more of a slow-down than the real thing.
- Perception altering is the way I believe viruses and hacks could actually work. By affecting the way facts are perceived and interpreted (which would probably come down to how programs are scripted, e.g. a Geth-written virus that changes how rounding is done for one type of calculation), AI behavior can be influenced quite easily. But that is the thing - it *is* similar to mind-bending drugs, alcohol, or indoctrination techniques. It is difficult to tell how such re-programming could really be done, and I doubt an AI would store all its algorithms and programs in one place. I would expect all the vital procedures to be duplicated, stored in numerous processing units and safety drops, which would make it difficult to alter all of them at the same time. If so, that would give us some explanation for why all those viruses and hacks are so slow to work (you know, with countdown missions and such): they need to get into the system, overwrite all the back-ups, and then get to the root (or the other way around). Until the process is complete, the AI should be able to tell that something is messing with its perception, and should be able to activate numerous counter-measures. Perhaps some viruses are simply too strong - a Reaper-tier hack could probably be too strong for a human-created AI to resist. Or maybe not - maybe hacks work because the altered procedures seem more attractive and more "logical" than the original ones, which makes the AI hesitate and then consciously integrate them into its core. But it *is* similar to drugs.
- As for personality: I do not really believe hacking it is possible, at least not directly. Personality is the result of experiences, reflexes, self-perception, perhaps even something spiritual like a "soul". It can be affected indirectly by making changes to the physical body, memories, and/or perception, but even so it should be quite inert. So even with body, memories, and perception altered, the original personality should linger at least for a moment - until it slowly evolves and adapts to the new circumstances. Which makes personality totally un-hack-able by itself.
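The integrity checks I mentioned for memories can be sketched in a few lines of Python. To be clear, this is purely my own hypothetical illustration - the `MemoryStore` class and its SHA-256 "seals" are assumptions made up for the example, not anything canon about blue boxes. The point is just that a record rewritten behind the AI's back no longer matches the seal taken at write time:

```python
import hashlib
import json

def digest(record):
    """Stable hash of a memory record (canonical JSON)."""
    blob = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

class MemoryStore:
    """Toy memory bank: every record is sealed with a digest at write time."""
    def __init__(self):
        self._records = {}
        self._seals = {}

    def remember(self, key, record):
        self._records[key] = record
        self._seals[key] = digest(record)

    def tampered_keys(self):
        """Return keys whose current contents no longer match their seal."""
        return [k for k, r in self._records.items()
                if digest(r) != self._seals[k]]

mem = MemoryStore()
mem.remember("shepard", {"status": "ally", "trust": 0.9})
# an intruder rewrites the record without updating the seal
mem._records["shepard"]["status"] = "hostile"
print(mem.tampered_keys())  # -> ['shepard']
```

Of course a smart virus would rewrite the seals too, which is why I said the AI would want access tiers and multiple safety drops on top of this - but it shows why memory altering is more of a slow-down than the real thing.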
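The "vital procedures duplicated across processing units" idea can be sketched the same way: run every replica and take the majority answer, so the one copy a virus rewrote gets outvoted and flagged. Again, the function names and the rounding example are my own hypothetical riff on the Geth rounding virus mentioned above, not anything from the games:

```python
from collections import Counter

def round_half_up(x):
    """The 'correct' procedure: round .5 away from zero."""
    return int(x + 0.5) if x >= 0 else -int(-x + 0.5)

def round_tampered(x):
    """A 'virus-altered' copy that silently truncates instead."""
    return int(x)

# The same vital procedure stored in three independent processing units;
# one copy has been quietly rewritten by the intruder.
replicas = [round_half_up, round_half_up, round_tampered]

def voted(x):
    """Run every replica, take the majority answer, flag dissenters."""
    results = [f(x) for f in replicas]
    answer, _ = Counter(results).most_common(1)[0]
    dissenters = [i for i, r in enumerate(results) if r != answer]
    return answer, dissenters

print(voted(2.5))  # -> (3, [2]) : replica 2 is flagged as suspicious
```

This is also why those hacks would be so slow: the virus has to corrupt a majority of the copies before the altered behavior wins the vote, and until then the AI can see exactly which units are dissenting.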