Forum Discussion
@Kondaru wrote:
Based on all our previous Mass Effect experience, the people and aliens developing AIs are absolute idiots.
I have no issue with Alec being an idiot - he seemed to be a nice and driven guy, but not exactly a genius. And please note that there have been many attempts to create AIs in the past, including the Geth, Eliza, the Moonbase AI ("Rogue VI"), EDI, the Reapers, the Zha'til, Tallaris, and the AI on the Citadel from the "Citadel: Signal Tracking" task... The majority of them were developed by large groups of dedicated specialists over dozens or hundreds of years - and all of them developed differently than their originators intended.
What is more probable: that SAM was self-developing for several years, making a fool out of Alec; or that a single human managed to do something that thousands of well-funded scientists, usually working from a much higher technological level, were not able to do? ;-)
Yes, a genius can do something that others can't, especially if he does something _different_ and changes the rules and how things work. Like the symbiosis between an AI and its organic partner.
@arthurh3535 wrote:
Yes, a genius can do something that others can't, especially if he does something _different_ and changes the rules and how things work. Like the symbiosis between an AI and its organic partner.
Symbiosis is not a new thing: it was already tested with the Overlord project and with the Zha'til.
You are right in stating that Alec *could* have been a genius, and that he *could* have attempted something new. But AIs are not puppets, by the very definition of "Artificial Intelligence" - they are sentient and intelligent beings, with their own goals and their own ways. I guess that developing SAM was more like raising a child than programming a video game. And children often do things that parents are unaware of, or that parents do not want them to do. Even more so, Alec had to build on some past experience with AI development, so the early stages of SAM were probably much more similar to a "standard" AI than the final product was.
The idea behind SAM is to make it/him more intelligent and capable than human beings. It is kinda obvious that SAM had to become more intelligent and capable than Alec *early* in its development...
- 8 years ago
@Kondaru wrote:
The idea behind SAM is to make it/him more intelligent and capable than human beings. It is kinda obvious that SAM had to become more intelligent and capable than Alec *early* in its development...
I think it is bigger than SAM just deciding to care.
SAM is your brother. Your mom and dad are SAM's mom and dad. The whole project was to bond with and teach an AI how to feel human, and Alec (and now you) were the source of feelings and emotion. How could he *not* care about Ma? When dad decides he's not going to let death win and will do *anything* to keep Ellen alive, SAM (even if he didn't at first) will eventually start doing the same things, and for the same reasons.
I just hope I can add enough new perspective to set that straight - and that dad's sacrifice for me will mean, for SAM, that if it came down to it, keeping me alive is more important than his primary mission of keeping Ma alive.
On a side note, am I the worst wingman ever if I don't set SAM up with the alien AI? Sure, she's crazy and dangerous - but so was Jack. SAM's no Shep, but could he make crazy/dangerous/hot work? 😃
- 8 years ago
Uh, I guess I am more conservative in this respect. SAM should not play with the Angaran AI, as the Angaran AI is naughty and behaves badly. There will surely be other, more appropriate AIs in the future that will be much more worthy of my SAM, right? 😃
- Anonymous, 8 years ago
about the whole zha'til mess... iirc, the reapers hacked the ai, then they took control of the physical form. so i'm not sure that's a fair example of the situation regarding ai/organic hybrids.
- 8 years ago
GENERAL MASS EFFECT SPOILERS AHEAD
But what does it mean that the Zha'til were "hacked"? They were sentient, self-aware... I *guess* it is similar to running the virus that converts the Geth heretics in ME2, which makes them use different rounding for *one* type of equation - and which can result in them coming to much different conclusions. It can be compared to mind-bending drugs or practices. The thing is that - as with every sentient being - being subjugated by drugs, diseases, or circumstances is... Well, controversial. People under the influence of drugs or alcohol are usually expected to still be responsible for their own actions, excluding some rare cases when they were tricked into taking them. We *do not know* if the Zha'til submitted to the Reapers of their own accord or if they were tricked into it. Please note that the Geth actually made a deal with the Reapers during ME3 - they were not automatically "converted". And when given some other chance to survive, they actively fought against the indoctrination.
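Just to illustrate the kind of change I mean, here is a toy sketch I made up myself (the games never show any actual code, so every name and number below is my invention): a "virus" that touches nothing but the rounding of one calculation, and still flips the final conclusion.

```python
import math

def threat_score(readings):
    # Average threat reading from the sensors (0.0 = friend, 1.0 = foe).
    return sum(readings) / len(readings)

def decide(readings, rounder):
    # The platform only acts on whole threat levels.
    level = rounder(threat_score(readings) * 10)
    return "attack" if level >= 5 else "stand down"

contact = [0.44, 0.45, 0.46]          # an ambiguous contact

print(decide(contact, math.floor))    # original rounding    -> "stand down"
print(decide(contact, math.ceil))     # "infected" rounding  -> "attack"
```

One tiny swap in one calculation, and the same perfectly "sane" reasoning ends in gunfire - which is more or less how I imagine the heretic virus.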
Spoiler: e.g. SAM is able to detect Knight's virus early enough to prevent it from taking effect. It means that SAM is "intelligent" and self-aware enough to protect itself from the majority of intrusions. The Zha'til were not capable or not willing to do that - which means that they eventually failed. It is not an obvious case, we lack a lot of data - and you are actually quite right that, as a result, it is not the best argument on my side. But it is still *true* that developing the Zha'til turned out differently than the Zha wanted it to.
- Anonymous, 8 years ago
@Kondaru wrote:
But what does it mean that the Zha'til were "hacked"? They were sentient, self-aware... [...]
as to what hacking actually refers to in this regard? i'm not sure, as it's not elaborated on. reaper-based (and perhaps even geth-based) electronic warfare tech is likely more sophisticated than what knight had to work with... even with edi, remember how quickly & quietly the reaper virus hit the normandy in me2? and that virus was pointed primarily at disabling the ship. if it had been tuned to attacking the ai, would edi have been able to stop it? that was also supposition, she very well might have been able to kill the virus if it had targeted her first, though perhaps not. and remember she's partly reaper based too, so if she could survive, that might have been a reason why.
so we just don't know. i also think it's a bit naive to compare hijacking a sentient ai to, say, getting it drunk, smoking dope, or shooting up. are you serious?
- 8 years ago
@CasperTheLich wrote:
so we just don't know. i also think it's a bit naive to compare hijacking a sentient ai to, say, getting it drunk, smoking dope, or shooting up. are you serious?
Yep, I am serious... Kind of.
While I am far from being an expert, I perceive an AI as a sentient being, and thus there are several aspects of it that I am quite sure simply *must* define it:
- physical "body" or a blue box and by extension all the connected terminals
- memories which are stored on hard drives or in clouds, and can be possibly shackled
- perception which is related to sensors and programs that are responsible for interpretation of stimuli
- personality which is unique and self-developed by AI.
So let's think for a moment about what can be affected by viruses and hackers, and how that would affect an AI.
- Viruses *can* potentially affect physical things, though this requires some skills and usually can be done with limited types of equipment only. Losing connection to external systems (like the Normandy-2 in ME2, perhaps) does not really impact AI personality, though prolonged sensory deprivation could possibly be dangerous. Much worse would be destroying blue box clusters, as that could probably restrict, handicap, turn off, or even outright kill the AI's sentience. While a truly developed AI probably has some safeguards, safety drops, and back-up systems in store, the consequences of physical interference are dire. At the same time, this is the least probable method of intrusion, and also the most difficult to use for mind-bending / re-programming (since there is no guarantee how destroying a single cluster of wires would affect the AI).
- Memories can be relatively easy to affect, and they are quite significant for actual AI behavior. At the same time, the AI should probably be aware of the fact, and thus able to recognize when some of the "remembered" facts do not match. I do believe that memories would be the easiest thing to "defend" - an AI can use numerous safety drops, access tiers, and integrity checks to protect itself from memory altering (see the toy sketch after this list). And yes, with super-human computing power an AI should be able to "deduce" the majority of hidden/shackled facts if such had somehow been programmed into it. Which means that memory altering should usually be more of a slow-down than the real thing.
- Perception altering is the actual way that I believe viruses and hacks could work. By affecting the way that facts are perceived and interpreted (which would probably be related to how programs are scripted, e.g. the Geth-written virus that changes the way rounding is done for one type of calculation), AI behavior can be easily influenced. But that is the thing - it *is* similar to mind-bending drugs, alcohol, or indoctrination techniques. It is difficult to tell how such re-programming could really be done, and I doubt that an AI would store all its algorithms and programs in one place. I would expect all the vital procedures to be replicated, stored in numerous processing units and safety drops, which would make it difficult to alter all of them at the same time. If such is the case, it would give us some explanation of why all those viruses and hacks are so slow to work (you know, with countdown missions and such): they need to get into the system, overwrite all the back-ups, and then get to the root (or the other way around). Until the process is complete, the AI should be able to understand that something is messing with its perception, and should be able to activate numerous counter-measures. Perhaps some viruses are too strong - a Reaper-tier hack could probably be too strong for a human-created AI to resist. Or maybe not - maybe hacks work because those altered procedures seem more attractive and more "logical" than the original ones, which makes the AI hesitate and then consciously integrate them into its core. But it *is* similar to drugs.
- As for personality: I do not really believe hacking it is possible, at least not directly. Personality is a result of experiences, reflexes, self-perception, perhaps even something spiritual like a "soul". It can be affected indirectly by making changes to the physical body, memories, and/or perception, but even so it should be quite inert. So even with body, memories, and perception altered, the original personality should linger at least for a moment - until it slowly evolves and adapts to those new circumstances. Which makes personality totally un-hack-able by itself.
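And to make the "replication plus integrity checks" idea from the list above concrete, here is a minimal sketch (again entirely my own invention - the names and numbers come from me, not from the games): vital memories are kept as several checksummed copies, and the AI only trusts a value when a majority of intact copies agree.

```python
import hashlib
from collections import Counter

def checksum(value: str) -> str:
    return hashlib.sha256(value.encode()).hexdigest()

class ReplicatedMemory:
    def __init__(self, value: str, copies: int = 5):
        # Each replica stores the value together with its checksum.
        self.replicas = [(value, checksum(value)) for _ in range(copies)]

    def tamper(self, index: int, new_value: str):
        # An attacker rewrites one replica but not its checksum.
        _, digest = self.replicas[index]
        self.replicas[index] = (new_value, digest)

    def read(self) -> str:
        # Discard replicas whose checksum no longer matches...
        intact = [v for v, d in self.replicas if checksum(v) == d]
        # ...and trust only a clear majority of what is left.
        value, votes = Counter(intact).most_common(1)[0]
        if votes <= len(self.replicas) // 2:
            raise RuntimeError("memory integrity compromised")
        return value

memory = ReplicatedMemory("Joker is crew")
memory.tamper(2, "Joker is a Reaper")
print(memory.read())   # still "Joker is crew"; one altered copy is outvoted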
- Anonymous, 8 years ago
i also think we shouldn't guess how artificial intelligence will actually work, what can influence its functions, or how viruses would affect it, until someone actually creates true ai, or at least a functional blueprint of it. though in the context of mass effect, i'm not really sure i get how most of their ai actually functions anyway. so, i'd just be guessing. nor am i an expert on programming or computer tech.
- 8 years ago
@CasperTheLich wrote:
i also think we shouldn't guess how artificial intelligence will actually work, what can influence its functions, or how viruses would affect it, until someone actually creates true ai, or at least a functional blueprint of it. though in the context of mass effect, i'm not really sure i get how most of their ai actually functions anyway. so, i'd just be guessing. nor am i an expert on programming or computer tech.
It's TV/movie/video-game AI; it functions as the plot dictates.
- 8 years ago
@CasperTheLich wrote:
i also think we shouldn't guess how artificial intelligence will actually work, what can influence its functions, or how viruses would affect it, until someone actually creates true ai, or at least a functional blueprint of it. though in the context of mass effect, i'm not really sure i get how most of their ai actually functions anyway. so, i'd just be guessing. nor am i an expert on programming or computer tech.
While I do agree that we should not get *too* serious when discussing AI in video games, I do not believe the suggestion that "we shouldn't guess" anything is valid - *at least* as far as the ME games are concerned. Well, AIs were put into the franchise, they are vital to the plots of *all* the Mass Effect games, and there was a lot of effort from the devs to explain some basics of how they work, both in-game and in codex entries. We cannot just say "OK, it is a hardly understandable magic box, let's accept that it can do anything and anything can be done with it". I want to believe that there is at least some "science" to the "fiction" element, with the ME games being advertised as SF games. ;-)
So yes, I believe I am entitled to make some assumptions and to develop some parallels that allow me to understand Mass Effect AIs better. I am using some basic logic to deconstruct the idea, and I am using those deconstructions to experience the game. I do not think that playing ME2 and ME3 would make much sense without allowing EDI or Legion some genuine personality, rather than just perceiving them as "meh, something, something, I do not get it, they don't really matter to me at all". And should I restrain myself from making any assumptions about alien species as well because, you know, they are "alien"?
Of course, the devs *can* (and often *do*) apply some elements that are inconsistent with my vision and understanding, which is their privilege - even if in some cases it just makes those elements cheap and silly to me.
At the same time, I can just discuss possible ways of hacking an AI with you and other users - which is probably not very productive, but still remains a nice and intelligent way of spending time. :-)
- Anonymous, 8 years ago
well, let's try flipping this around then. if you were going about making a virus to corrupt an ai... such as EDI, or the zha'til (the latter you'd first need to define technologically, as we know so little about them), how would you go about it? the intent would be to turn them hostile against organic life, for a useful purpose, such as (but not limited to) turning the zha'til into a living weapon simply to divert the attention of the protheans or somesuch. i'm just using that as an example.
or, on second thought, this is getting way off topic. maybe a new thread? something like "creating a technological singularity with the intent of causing the end of the world"? just a working title.
- 8 years ago
@Kondaru wrote:
- As for personality: I do not really believe hacking it is possible, at least not directly. Personality is a result of experiences, reflexes, self-perception, perhaps even something spiritual like a "soul". It can be affected indirectly by making changes to the physical body, memories, and/or perception, but even so it should be quite inert. So even with body, memories, and perception altered, the original personality should linger at least for a moment - until it slowly evolves and adapts to those new circumstances. Which makes personality totally un-hack-able by itself.
Do you think you could force a change of personality by "hacking" in memories or experiences that otherwise wouldn't exist? I would expect SAM's personality to adapt and change with the addition of my input as well as my father's, doubling the sample size. It also appears SAM's connection works the same way for my sibling (and probably all members of the pathfinding teams). That's a scary thought - Liam is helping to write SAM's personality. If you uploaded 25 GB of renegade Shep, would SAM blink?
- 8 years ago
One immediate way would be to provide invalid sensory input by hijacking sensors or terminals. If EDI's sensors were twisted for her to perceive Joker as a Reaper, it/she would probably shoot him on sight - which is enough.
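A little sketch of what I mean (the "silhouette" check and all the names below are my own made-up example, not anything from the games): the decision logic stays perfectly sane, only the sensor feed is lied to.

```python
def classify(contact: dict) -> str:
    # Perfectly reasonable, uncorrupted decision logic.
    return "hostile" if contact["silhouette"] == "reaper" else "friendly"

def honest_sensor():
    return {"name": "Joker", "silhouette": "human"}

def hijacked_sensor():
    # The virus sits between the sensor and the AI and rewrites readings.
    contact = honest_sensor()
    contact["silhouette"] = "reaper"
    return contact

print(classify(honest_sensor()))    # friendly
print(classify(hijacked_sensor()))  # hostile - same AI, poisoned input
```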
Another method would be to *convince* the AI that organic beings should be destroyed - which can be done either by altering its reasoning (re-programming) or by providing good reasons (possibly with memory uploads / replacement, but it can also be done by simply listing some reasons in a persuasive way). As far as I understand, that is how Reaper indoctrination worked: they were providing good reasons for some specific behavior, and then reinforced those reasons with "programs" that the indoctrinated party willingly accepted, but which were actually taking over or "shackling" both organic and synthetic beings.
Possibly, weak points in the programming can be identified, and then the hacker can feed the AI something similar to malicious hyperlinks. If we assume that a huge number of programs need to run for the AI to be operational, it is possible to infect the AI through data that is related to some of the "petty" programs, without the AI being aware of the fact or having time to counter. In a similar way, people are aware of what they see, smell, and feel, but are not directly aware of, e.g., how their heart beats or what is in the air they are breathing. An AI can be "aware" of and "control" major processes, but it may not be able to consciously care for *all* of them. Perhaps it would thus work in a way similar to how diseases and vaccinations work for humans: young AIs still rely on the original programs and procedures, and those can be easily exploited. With experience, AIs learn how to defend themselves and replace those original programs and procedures - which are then much more difficult to circumvent. It could also be that, e.g., something similar to a DDoS could be used to flood the AI with data that forces it to analyze complex input, so that it loses the ability to thoroughly control other "basic" processes. It would then be enough to smuggle some hidden code with a "program update" tag, or to prod the AI with some false stimuli.
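And a toy sketch of that last "flood it, then smuggle the payload" trick (pure speculation on my part - the scan budget, the tags, everything below is made up): the AI deep-scans incoming packets until its per-cycle budget runs out, and afterwards it falls back to trusting the declared tag.

```python
def inspect(packet):
    # An expensive deep scan; when it actually runs, it always catches malware.
    return packet["payload"] != "malware"

def process_cycle(packets, scan_budget):
    # The AI deep-scans packets in arrival order until the budget runs out;
    # after that it falls back to trusting the declared tag.
    accepted = []
    for i, packet in enumerate(packets):
        if i < scan_budget:
            if inspect(packet):
                accepted.append(packet)      # scanned and clean
        elif packet["tag"] == "program update":
            accepted.append(packet)          # unscanned, but the tag looks trusted
    return accepted

# The flood burns through the scan budget, then the payload rides in behind it.
flood = [{"tag": "telemetry", "payload": "junk"} for _ in range(999)]
flood.append({"tag": "program update", "payload": "malware"})

survivors = process_cycle(flood, scan_budget=50)
print(any(p["payload"] == "malware" for p in survivors))   # True
```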
Physical interference, e.g. physically replacing data cores or processors, always remains the greatest risk, even though it is the easiest to detect.
Well, as always, it depends on how we define personality. I would say that personality is something that is responsible for wishes, sentiments, reflexes... Sure, it results from experience, feelings, and perception. Sure, by altering memories or perception one would definitely impact personality. But I would not expect the change to be instant.
Let me try a parallel: let's assume that a wife loves her husband, that he is good to her, provides money, safety, etc. It lasts for years. Then it turns out that the husband is a psychopath and a serial murderer. Riiight, she knows this is not good, but it does not necessarily change her *feelings* toward him - she is used to trusting him and depending on him. Then let's say that he hits her. OK, that is even worse (eh, this is relative, and perhaps depends on perspective, but I would say this is worse *for her*). But she has never considered living without him, so even if she starts to fear him instead of loving him, she is still not able to change her ways *just then*. It will take her a moment to reconsider her position, and possibly to react in some meaningful way.
Another example: let's assume that we have an AI that is embedded in a combat platform and tasked with military duties. This AI constantly fights, and develops thousands of programs and algorithms for clashes, skirmishes, and battles. Then someone manages to replace all its memories with an illusion of being a nurse. Probably replaces the combat platform with some benign one as well. Sure, the AI believes itself to be a nurse, and understands what being a nurse is about... But, hey, all the programs and algorithms it has are still for combat rather than for nursing, right? As a result, our AI is a bit sloppy as a nurse, until it develops some actual nursing procedures. At the same time, when given a gun or two, it would easily revert to its old programs - even though it would not truly understand why it is such a good fighter and such a poor nurse. And true - it would change IN TIME, so e.g. after several months or years all those old combat programs would surely be replaced with nursing programs... But it is never instant.
And as for the Shepard thing... Well, there is no denying that SAM would *need to* change after such a feed. After all, the original ME trilogy changed all of *us*. I bet it is much stronger in this respect than any Reaper indoctrination! ;-)
- 8 years ago
From Alec's personal notes: Evil Overlord rule #59: I will never build a sentient computer smarter than I am.
- Anonymous, 8 years ago
so i guess that means we've all been under shepard indoctrination all this time.
- 8 years ago
Aren't we? 😃
I decided to call my dog "Shepard", and whenever I have bad times in my life I play the "Reignite" fan song by Malukah. If this doesn't sound like indoctrination, I do not know what does... ;-)
- Anonymous, 8 years ago
i'm with you up to the dog part... i have 2 cats named Jack and Marco, and no, when naming them we weren't actually thinking of the hunt for red october.
- 8 years ago
@fudgietroll wrote: From Alec's personal notes: Evil Overlord rule #59: I will never build a sentient computer smarter than I am.
That actually defeats the purpose of the AI being synergistic with him and enhancing him beyond human ability...
- 8 years ago
@Kondaru wrote:
While I am far from being an expert, I perceive an AI as a sentient being, and thus there are several aspects of it that I am quite sure simply *must* define it... [...]
So how many, and which, of the Three Laws did Alec use when creating SAM? 🤓
- 8 years ago
@fudgietroll wrote: So how many, and which, of the Three Laws did Alec use when creating SAM? 🤓
As far as I can tell, none. An Asimov robot would have literally frozen up dead at killing Ryder to save Ryder.
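For fun, here is a toy version of that freeze (obviously not how SAM is actually written - just the First Law as a naive check I made up):

```python
# Toy First Law check, invented just to illustrate the deadlock:
# "A robot may not injure a human being or, through inaction,
#  allow a human being to come to harm."

def first_law_permits(action):
    if action["harms_human"] and action["inaction_harms_human"]:
        # Acting harms a human, and so does doing nothing: deadlock.
        raise RuntimeError("First Law deadlock - the robot freezes")
    return not action["harms_human"]

# SAM's gambit: stopping Ryder's heart (harm through action) in order
# to restart it and save Ryder (harm through inaction otherwise).
transfer = {"harms_human": True, "inaction_harms_human": True}
first_law_permits(transfer)   # raises - an Asimov robot locks up right here
```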
- 8 years ago
@arthurh3535 wrote:
As far as I can tell, none. An Asimov robot would have literally frozen up dead at killing Ryder to save Ryder.
Ooh! I have it! The Benefactor is The Runaway Robot! 💡