
AI Big Brother

Updated: Nov 27



Artificial Intelligence is- Hey! Come on! I haven’t even said anything yet!


Ahem. As I was saying, Artificial Intelligence is a term that usually creates more problems than it resolves. That’s because everyone appears to have a different definition, and most people’s definitions seem to shift without warning.


In order to have any sort of useful discussion, we must define terms. Good luck with that! Defining AI as “intelligence exhibited by machines, particularly computer systems” is not helpful, as it just pushes the problem back a step: now you have to define “intelligence”, which drags in another batch of terms that also need explanation, none of which helps much in a practical sense. Let’s leave that to cognitive scientists and philosophers.


For most people, the question takes a form like “can we create sentient computers?”


Sigh. “Sentient” might even be worse than “intelligent”.


Should we call it “true AI”, or “AGI”, or “Artificial General Intelligence”, or maybe “Synthetic Intelligence”?


Ultimately, though, what we’re usually talking about is whether it is possible to create a computer that is a “person” – that has “true” consciousness, as opposed to “simulated”. At this point, most people start talking about the Turing test, which essentially uses the ability to simulate human responses as a proxy for intelligence. It was a brilliant idea and a fascinating thought-experiment, but it was also 75 years ago – things have changed a bit. Modern LLM (Large Language Model) chatbots like ChatGPT can pass the Turing test, but cannot be realistically described as exhibiting consciousness – though we can be fooled. When it comes to “true” or “conscious” AI, I think the Turing test might be described as “necessary but not sufficient”, at least in terms of chatbots.


I would note here that most current “AI” tools are simply ingesting data, identifying patterns, and then responding based on those patterns. (It would be more accurate to refer to this as Machine Learning, but never mind.) There is clearly value in this approach, but does it lead to AGI? Some researchers, such as the ARC Prize team, assert that a common current definition of AGI (“AGI is a system that can automate the majority of economically valuable work.”) is wrong, and that AGI should instead be defined as “... a system that can efficiently acquire new skills and solve open-ended problems.”
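To make the “ingest data, identify patterns, respond from patterns” point concrete, here is a deliberately tiny sketch in Python – my own illustration, not anything resembling a real product. It builds a word-level bigram (Markov) model; an LLM is vastly more sophisticated, but the basic loop has the same shape: statistics in, statistically plausible text out.

```python
import random
from collections import defaultdict

# Toy illustration only: ingest data, identify patterns, respond from patterns.
# A word-level bigram (Markov) model -- vastly simpler than an LLM, but the
# same basic shape.

def train(text):
    """Record which words tend to follow which (the 'patterns')."""
    followers = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers

def respond(followers, start, length=10):
    """Generate a 'response' by sampling from the observed patterns."""
    word, output = start, [start]
    for _ in range(length):
        options = followers.get(word)
        if not options:
            break
        word = random.choice(options)
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog slept on the mat"
patterns = train(corpus)
print(respond(patterns, "the"))   # e.g. "the dog slept on the mat and the cat sat on"
```

Nothing in that loop “understands” cats or mats; it only reflects the statistics of whatever it was fed – which is roughly the point being made above.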


Based on what we know about the human brain, it is clear that it is made up of multiple modules that interact in complex ways, and it seems likely that consciousness is an emergent property of the whole network. Or, put another way, we probably don’t have a separate “consciousness” module in our brain, but rather “attain” consciousness due to the presence and interactions of various modules in the brain.
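“Emergence” is easier to show in a toy system than to define. The sketch below is my own loose illustration, not a model of the brain: a population of Kuramoto-style coupled oscillators. No individual oscillator “contains” synchrony, yet the group as a whole settles into it – a property of the network, not of any single unit, which is what the modular-brain argument above is gesturing at.

```python
import numpy as np

# Toy emergence demo (not a brain model): Kuramoto-style coupled oscillators.
# No single oscillator "has" synchrony; it appears only from their interaction.
rng = np.random.default_rng(0)
n, coupling, dt, steps = 50, 2.0, 0.05, 400
freqs = rng.normal(1.0, 0.1, n)          # each unit's own natural frequency
phases = rng.uniform(0.0, 2 * np.pi, n)  # random, uncoordinated starting phases

def order_parameter(phases):
    """0 = disordered, 1 = fully synchronized (a purely collective property)."""
    return abs(np.mean(np.exp(1j * phases)))

print(f"order before: {order_parameter(phases):.2f}")   # low: no coordination
for _ in range(steps):
    # Each unit nudges toward the others; global order is nowhere coded explicitly.
    diffs = phases[None, :] - phases[:, None]            # theta_j - theta_i
    phases = phases + dt * (freqs + (coupling / n) * np.sin(diffs).sum(axis=1))
print(f"order after:  {order_parameter(phases):.2f}")    # typically close to 1.0
```

The analogy to consciousness is very loose, of course, but it makes “emergent property of the whole network” something you can actually watch happen.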


As we learn more about how the brain works, and about its different modules, we will eventually develop the technology to build simulations of those modules, and will use them to learn still more. At some point, we will likely have the ability to build a realistic copy of a human brain. There are a host of ethical and technical challenges around this, but if we were to build a realistic copy of a human brain – one that is sufficiently complex and sufficiently close to the modularity of a living brain – we could realistically expect it to behave like a human brain. If consciousness is an emergent property of the various modules and their interactions, it seems clear that this “copy” would be conscious in the same way we are.


This is where experience comes in. We are more than our brain, and without sensory input and experience, we cannot develop. So, we would also need to simulate experience, and...


Oh, dear. We just created the Matrix... I guess that’s one way to do it, and it seems likely that it would work, though it could take centuries for us to develop the technology.


Another possible approach would be to build computer-based equivalents of the various modules we have identified in the human brain, and see whether a computer that includes enough of the required modules could “attain” consciousness more or less “organically”. Robert A. Heinlein describes something like this in The Moon Is a Harsh Mistress:


“When Mike was installed in Luna, he was pure thinkum, a flexible logic — "High-Optional, Logical, Multi-Evaluating Supervisor, Mark IV, Mod. L" — a HOLMES FOUR. He computed ballistics for pilotless freighters and controlled their catapult. This kept him busy less than one percent of time and Luna Authority never believed in idle hands. They kept hooking hardware into him — decision-action boxes to let him boss other computers, bank on bank of additional memories, more banks of associational neural nets, another tubful of twelve-digit random numbers, a greatly augmented temporary memory. Human brain has around ten-to-the-tenth neurons. By third year Mike had better than one and a half times that number of neuristors.
And woke up.”

Another option is to keep working on understanding what intelligence actually is, and on developing a synthetic intelligence from that understanding – which appears to be the approach taken by the ARC Prize. It’s unlikely that this work, by itself, will “solve” the problem; more likely, it’s another piece of the overall puzzle.


Any or all of these options could potentially work, but we should be asking ourselves whether we should be pursuing any of them. And, if so, how should we address the myriad moral and ethical questions around the work? We could ban such research, but is there any realistic chance that a ban would hold? We could try to establish regulations that clearly describe what is and is not appropriate, but that would only really work if we understood ahead of time what we were trying to learn.


The approach most likely to be effective is to establish guidelines and codes of ethics for both the research and the researchers, and to try to minimize the risks inherent in work of this type.


Back to the technical side: consider that a brain modelled on a human brain would likely behave like a human brain, and could be understandable to a similar degree. In contrast, if we are successful in creating a synthetic brain, would we be able to understand it? Or could it be a truly alien intelligence?


Is there even value in working toward AGI? Would it not be more useful (and less dangerous) to pursue narrow AI that can help us in specific areas? Or to pursue brain-computer interfaces that enhance our existing abilities?


As an example, consider “centaur chess”, where human players work together with chess programs. In theory, this combination could provide the best of both – the creativity of the human, along with the memory and evaluation capacity of the computer. Would this approach not maximize the benefit to humanity, while bypassing both the technological challenges of pursuing AGI and many of the related ethical and moral issues?
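As a rough sketch of what that human-plus-engine partnership might look like in code – assuming the python-chess library and a locally installed UCI engine such as Stockfish (the path below is just a placeholder) – the engine supplies memory and brute-force evaluation by proposing candidate moves, and the human supplies judgment and makes the final call:

```python
import chess
import chess.engine

# Sketch of a "centaur" loop: the engine proposes and evaluates candidate
# moves, the human decides. Assumes python-chess plus a local UCI engine;
# the Stockfish path below is a placeholder for illustration.
ENGINE_PATH = "/usr/local/bin/stockfish"

board = chess.Board()
engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
try:
    while not board.is_game_over():
        # Computer side: top three candidate lines, with evaluations.
        infos = engine.analyse(board, chess.engine.Limit(depth=12), multipv=3)
        for i, info in enumerate(infos, start=1):
            move = info["pv"][0]
            print(f"{i}. {board.san(move)}  (score: {info['score'].white()})")

        # Human side: accept a suggestion, or play something else entirely.
        choice = input("Your move (SAN), or a number to accept a suggestion: ").strip()
        if choice.isdigit() and 1 <= int(choice) <= len(infos):
            board.push(infos[int(choice) - 1]["pv"][0])
        else:
            board.push_san(choice)
finally:
    engine.quit()
```

In practice a centaur player also steers the engine – deciding which lines to explore and when to trust it – but even this trivial loop shows the division of labour described above.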


In keeping with the title above, and several recent posts relating to George Orwell, consider a scenario where a group develops an AI Big Brother. Not simply an artificial human brain, but a synthetic intelligence tied into the internet.


Such an entity would not have the limitations of a human intelligence, and could be tied into everything we currently do. The problem is that we probably can’t imagine the capacity of such an entity – most of our fictional AI systems seem to assume a human-like intelligence, usually with greater memory and calculating capacity, and some degree of parallelism – i.e., the ability to do several things simultaneously. As examples, consider HAL 9000 or Proteus (of Demon Seed).


But what if Big Brother was a single entity? What if the Thought Police were robots controlled by a single intelligence? What if history were being rewritten on the fly, so that any human would only see the official version? What if Big Brother WAS the surveillance state? True nightmare fuel, and it could just get worse from there on.


Maybe better if we just don’t, huh?


Cheers!
