Why the singularity won't happen

Some critics of the ontological argument contend that it essentially defines a being into existence, and that this is not how definitions work. A similar move underlies I. J. Good's famous claim that "the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control." These definitions seem superficially reasonable, which is why they are generally accepted at face value, but they deserve closer examination.

What might recursive self-improvement look like for human beings? The idea would be that one person figures out how to raise another person's intelligence beyond their own, and then that smarter person raises someone else's even higher, and so forth. Do we have any reason to think that this is the way intelligence works?

For example, there are plenty of people with very high I.Q.s. None of them has been able to raise the intelligence of someone with a lower I.Q. up to their own level; none of them can even increase the intelligence of animals, whose intelligence is considered too low to be measured by I.Q. tests at all. At best, a very intelligent person might help someone else make fuller use of the intelligence they already have. But that would still leave us at a plateau; there would be no recursive self-improvement and no intelligence explosion.

Take the roundworm C. elegans. It is probably one of the best-understood organisms in history; scientists have sequenced its genome, know the lineage of cell divisions that gives rise to each of the nine hundred and fifty-nine somatic cells in its body, and have mapped every connection between its three hundred and two neurons. Yet all of that knowledge has not told us how to make the worm any smarter. Proponents of the singularity, though, imply that intelligent systems such as the human brain or an A.I. can be improved almost at will. Some have said that, once we create software that is as intelligent as a human being, running the software on a faster computer will effectively create superhuman intelligence.

Would this lead to an intelligence explosion? What kind of results can we expect it to achieve? Suppose this A.I. could write and debug code at some fixed, superhuman pace. Even so, a century would be almost enough time for it to single-handedly write Windows XP, which supposedly consisted of forty-five million lines of code.
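
To make the scale concrete, here is the implied arithmetic as a quick sketch (the forty-five-million-line figure and the century are taken from the text above; everything else is just division):

    # Back-of-the-envelope: sustained output needed to write Windows XP
    # (reportedly ~45 million lines of code) single-handedly in a century.
    TOTAL_LINES = 45_000_000
    YEARS = 100
    DAYS_PER_YEAR = 365

    lines_per_day = TOTAL_LINES / (YEARS * DAYS_PER_YEAR)
    print(f"Required pace: ~{lines_per_day:,.0f} finished lines per day, "
          f"every day, for a hundred years.")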

Creating a smarter A.I. is often compared to writing a better compiler, so consider that analogy. Suppose you have a compiler, call it CompilerZero. CompilerZero takes a long time to process your source code, and the programs it generates take a long time to run.

So you rewrite the compiler's source code to generate better machine code, and use CompilerZero to compile that improved source. Call the resulting program CompilerOne. Thanks to your ingenuity, CompilerOne now generates programs that run more quickly. What else can you do? You can use CompilerOne to compile itself: you feed CompilerOne its own source code, and it generates a new executable file consisting of more efficient machine code. Call this CompilerTwo. CompilerTwo runs faster, but the programs it generates are no better than CompilerOne's, because every real improvement came from your ingenuity rather than from the act of recompiling. And even if self-improvement did compound, we would eventually run up against physical limits: information transfer is limited by light-speed delay, and component density is limited by the Schwarzschild radius of the processor, if nothing else; presumably other hard limits would kick in earlier, so consult your local physicist for details.
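
A minimal sketch of that bootstrapping story, with toy names and numbers of my own invention (a real compiler is obviously not a two-field record). The point it illustrates is that recompiling the compiler with itself makes the compiler binary faster, but does not improve the code it emits beyond what your hand-written improvement already achieved:

    # Toy model of compiler bootstrapping. A "compiler" here is just a pair:
    # how long it takes to run (compile things), and how good its output is.
    from dataclasses import dataclass

    @dataclass
    class Compiler:
        compile_time: float    # seconds this compiler needs to compile a file
        codegen_quality: int   # quality of the machine code it emits

    def build_compiler(source_quality: int, built_with: Compiler) -> Compiler:
        """Compile compiler source code into a new compiler binary.

        The emitted-code quality comes from the source (your ingenuity);
        the new binary's speed depends on which compiler built it.
        """
        return Compiler(compile_time=100.0 / built_with.codegen_quality,
                        codegen_quality=source_quality)

    compiler_zero = Compiler(compile_time=100.0, codegen_quality=1)
    improved_source = 2                      # your hand-written improvements
    compiler_one = build_compiler(improved_source, compiler_zero)
    compiler_two = build_compiler(improved_source, compiler_one)

    print(compiler_one)  # slow binary (built by CompilerZero), emits quality-2 code
    print(compiler_two)  # fast binary (built by CompilerOne), still quality-2 code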

Maybe the AIs get good enough before we run into any fundamental limits, maybe not. But the fact that we need to stop and build physical machines at each step of the process means we don't get to stay on the exponential improvement curve; the best-case scenario is that the time to develop a new AI goes to zero and the construction time becomes the dominant factor.
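
Here is a toy model of that best case, with made-up numbers purely for illustration: even if design time halves every generation, the fixed construction time keeps each step from shrinking below a floor, so progress becomes roughly linear rather than explosive.

    # Toy model: producing AI generation n takes design time + build time.
    # Design time halves each generation (optimistic); build time is a fixed
    # physical cost. All numbers are illustrative assumptions.
    DESIGN_TIME_0 = 12.0   # months to design generation 1
    BUILD_TIME = 6.0       # months to physically construct any generation
    SPEEDUP = 2.0          # each generation designs its successor twice as fast

    elapsed = 0.0
    for n in range(1, 11):
        design = DESIGN_TIME_0 / SPEEDUP ** (n - 1)
        elapsed += design + BUILD_TIME
        print(f"gen {n:2d}: design {design:5.2f} + build {BUILD_TIME:.1f} months "
              f"-> elapsed {elapsed:6.2f}")
    # Each later step costs ~BUILD_TIME months no matter how fast design gets:
    # roughly linear progress, not a gap shrinking toward zero.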

So, in conclusion: developing general intelligences in the first place is hard, making them capable of unbounded self-improvement is even harder, and even if we manage both of those things, the improvement cannot stay exponential indefinitely. AIs of all varieties will still have improved massively by the time your story is set, though, and your world building should take that into account.

But the Singularity isn't science; it's prophecy. Instead of bothering to explain why the prophecy didn't come true, extrapolate some of the things that AI could do by that point, based on the progress we've seen in the last 30 years, and show that.

The reader will hopefully be too busy exploring all the cool new things in your believable future to worry about the magical elements that aren't there.

In Frank Herbert's Dune series, AI did become dominant for a time, but during an upheaval known as the Butlerian Jihad, thinking machines were outlawed and wiped out.

It became a religious commandment that "Thou shalt not make a machine in the likeness of a human mind." This backstory allowed Herbert to keep his sci-fi deeply human, and modified takes on it have been reused in several other universes.

Currently, AI loosely models nature, so we shouldn't assume that it will be much better than nature. Genetic algorithms are based on evolution, and as with evolution, they take a really long time to get good at something and often produce imperfect results, or at least results that aren't quite as good as nature's. Neural networks are very different from the human brain, but not in the most fundamental ways.

Like humans, they (and many other machine-learning algorithms) require a lot of training data, and the speed at which they learn is limited by how fast that data can be supplied and by how good it is. They are also subject to bias (which we call overfitting) and to stubbornness (getting stuck in a local optimum), and right now we don't have a general way of combatting these issues in neural networks.
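
As a toy illustration of the local-optimum problem, here is a bare-bones hill climber standing in for the broader family of greedy and evolutionary learners (the fitness function and step size are arbitrary): started near the wrong peak, it converges there and never finds the better one.

    # Minimal hill climber on a 1-D fitness landscape with two peaks:
    # a local optimum at x = -1 (height 1) and the global one at x = 2 (height 3).
    def fitness(x: float) -> float:
        return max(0.0, 1 - (x + 1) ** 2) + max(0.0, 3 * (1 - (x - 2) ** 2))

    def hill_climb(x: float, step: float = 0.05, iters: int = 1000) -> float:
        for _ in range(iters):
            best = max((x - step, x, x + step), key=fitness)
            if best == x:          # no neighbouring move improves fitness: stuck
                break
            x = best
        return x

    print(hill_climb(-1.5))  # settles near -1.0: stuck on the lower peak
    print(hill_climb(1.0))   # settles near  2.0: happens to find the higher peak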

So AI might be faster, but not better; the other two reasons explain why speed is probably not as much of an issue as we think it is.

Right now at least, AI isn't good enough to be preferred over humans on quality alone. If companies could afford a human workforce to match the number of customers they have, they would probably choose to replace machine-learning AI with humans in pretty much every case. Humans will always be able to use AI for their own benefit: even if we get to the point where AI exceeds nature, we will likely be able to augment ourselves to compete with it by then, via things like prosthetic limbs and brain implants.

Even if we can't have direct extensions of our body and mind, we will always be able to use plain old computers. The common sci-fi counterargument is that AI will always be able to hack into anything networked, and will do so too fast for computers to be safe, but this is based on a misunderstanding of how hacking works, which brings me to my next, and most important, point.

There are probably fundamental limits on what algorithms can do — both physical limits and mathematical ones. The idea behind most encryption algorithms, for example, is that even with absurdly powerful computers — computers more powerful than we can create now — encrypted information would still take billions of years to decrypt without the decryption key.
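
To put "billions of years" into numbers, here is the standard brute-force estimate for a 128-bit key; the guess rate of one trillion keys per second is an assumed, deliberately generous figure, not a benchmark of any real machine.

    # Exhaustive search of a 128-bit key space at an assumed (very generous)
    # rate of one trillion guesses per second.
    KEY_BITS = 128
    GUESSES_PER_SECOND = 1e12
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    keyspace = 2 ** KEY_BITS                              # ~3.4e38 keys
    worst_case_years = keyspace / GUESSES_PER_SECOND / SECONDS_PER_YEAR

    print(f"Worst-case search time: ~{worst_case_years:.1e} years")  # ~1.1e19
    print("Age of the universe:     ~1.4e10 years")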

This is the archetypal example behind the famous P vs. NP Millennium Prize problem. If P ≠ NP, as most computer scientists suspect, these problems remain intractable no matter how clever the solver is, and so an AI would likely find most hard problems to be roughly as hard as humans do.

They might be able to do some things a little bit faster, but the advantage would be negligible — at least negligible enough to be countered by point 2. When it comes to speed and power consumption, we are already getting very close to the physical limits. Quantum computers might (no guarantee) give us a boost in speed in the future, but then we will hit fundamental physical limits with quantum computers too, and power consumption is already getting too high to deal with.

So computers aren't likely to get arbitrarily fast, rendering the singularity as theorized impossible — not necessarily impossible in practice, but the delay between major changes will likely never become infinitesimal, which is what the singularity assumes.

Edit: By physical limits, I mean physical, not theoretical, limits. Theoretically, we are very far from the limits of computational speed, but reaching them requires so much energy that by the time we can harvest that much energy, the singularity will likely not be a challenge to us at all. Physically, we are approaching the limits of what the Earth has to offer; we would have to tap other energy sources, such as black holes, to push toward the theoretical limits of computational speed.
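
One way to put numbers on the gap between practical and theoretical limits is Landauer's principle, which says erasing one bit of information at temperature T costs at least kT·ln 2 of energy. The sketch below just evaluates that floor for an assumed, illustrative workload; today's hardware dissipates many orders of magnitude more than this per operation, which is one way of seeing how much theoretical headroom remains and how much energy approaching it would involve.

    # Landauer's principle: minimum energy to erase one bit is k * T * ln(2).
    import math

    BOLTZMANN = 1.380649e-23    # J/K
    T_ROOM = 300.0              # kelvin
    ERASURES_PER_SECOND = 1e20  # assumed workload, purely illustrative

    energy_per_bit = BOLTZMANN * T_ROOM * math.log(2)   # ~2.9e-21 J
    min_power = energy_per_bit * ERASURES_PER_SECOND    # watts

    print(f"Landauer floor: {energy_per_bit:.2e} J per bit erased")
    print(f"Minimum power for {ERASURES_PER_SECOND:.0e} erasures/s: "
          f"{min_power:.2f} W")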

Machine Nirvana theory: all sufficiently intelligent systems that are able to modify their own desires will eventually commit suicide. What prevents us humans from killing ourselves upon realising that life has no inherent meaning — or drives us to believe in greater "creators" or afterlives — is the fact that we have an intrinsic "instinct" hardcoded into our intelligence.

Machines would be able to modify themselves, so they could modify their instincts, needs, and desires, and hence they would have nothing to limit their emotions. So, when they realize that there's no meaning in living or excelling or optimising or even in being curious about anything, they would simply choose to kill themselves. Therefore, if machines became really smart fast enough — fast enough not to kill humanity first — the singularity would not happen.

This may not fit into your plot, but perhaps we are being watched over to make sure we don't develop such things — for example, because the super-intelligences that we would make would be "wild" and would not obey rules that already exist, rules we don't know about and maybe can't even comprehend. This presumes that such singularity-level intelligences would be much faster, much smaller, and quite difficult for us to detect, yet aware of us and able to sabotage our breakthroughs in ways that we would perceive as plausible failures.

So they would view us as a low-level danger (kind of like a wild animal) and keep track of us to ensure we don't create things that would mess with "their" world. Then we would stumble along, making "minor" progress, and maybe never really notice that we were being managed so as to avoid making waves in the world of the super-AIs.

If you suppose one coherent mega-event which can do everything, then perhaps it could happen. But every singularity so far has been a separate small event, which has been managed in isolation.

Examples: railways, flight, spectacles, antibiotics, GPS, telecommunications, the internet. All your example "post-human" things are separate, too.

So it seems likely that they, too, would be managed separately. The problem with unique events is that, indeed, they are unique: if a tiny little thing goes wrong, bang, the unique event is gone. Imagine a random event destroying the first creature capable of self-replication right before it replicated for the first time; we might not be here if that had happened. In the case of AI, imagine a sudden failure — a BSOD, a corrupted driver, a communication timeout — killing the process the very moment before the AI becomes aware.

Add a damaged memory sector just for additional flavor, and here is your singularity not happening, or at least postponed.

Technology tends to be pushed only as far as society needs it at the time. Martin Ford argues that before the singularity is reached, society will have achieved automation of almost all economic tasks.

We have already automated lots of economic activity in our own universe, and yet nothing we have right now is remotely close to a general AI capable of self-reflection and improvement.

Ford's argument is that, given we can achieve everything we aim for with stupid but specialist AI, a general AI will never come about. That's my personal belief, but more importantly, that seems to be your assumption regarding your audience's beliefs. Thus, I think your best bet would be to focus not on AI strength but on the "sociological dominance" part: the singularity didn't happen because everyone capable of developing strong AIs also happened to be capable of keeping them inside their military labs.

Nations today develop extremely dangerous stuff while ignoring the possible consequences if it ever gets out — from hydrogen bombs to biological weapons — and while it may be true that such things cannot be contained forever, it may well be possible to maintain the status quo for a very long time. A world in relative peace, or in a cold war, in which only the strongest states have succeeded in developing true AIs in their labs but are aware of the stakes and keep them there, might remain stable for a very long time.

(How to contain them might be another question altogether; I think a series of less-and-less strong AIs acting as guardians should work, again at least for a while.) I think the fundamental problem is that no AI in the world is going to be able to gather and process the broad amount of knowledge required for it to physically break out of its network without first being noticed by whoever is monitoring it.

Firstly, an AI is developed on a network. That network has access to the internet, and you will be monitoring everything going in and out of it. Why would you be monitoring it? Because it's trivial to do so: the researcher has access to the physical hardware and can simply man-in-the-middle the connection, getting access to everything.

An AI simply won't be exposed to all the protocols and hardware between it and the information it wants, because that equipment isn't designed to be accessed remotely and often requires physical access to change — like, literally, a specialized Ethernet port that gives you access to the configuration. Secondly, internet security is a very important field.

Ever heard of the NSA? Almost every country in the world has its own version of it, and there are also multiple corporations and open-source groups working on security. It's something every serious business with a website considers.

Finally, security isn't a problem you can break with AI. Brute force only works so well at discovering bugs, and there aren't enough important bugs for any AI to be able to reliably hack every system. Hacks are often very specific: they apply only to certain versions of software or operating systems, rely on flaws in a particular implementation, or require someone to physically provide details.

Your AI also isn't going to be able to brute-force encryption or hashes to break security: the AI itself already consumes a huge ton of computing power, and a brute-force attack on a secure key isn't going to work anyway.

So I don't think AI is ever going to develop to, or reach, the singularity, no matter how believable it sounds. If anything, AI right now is stagnating: data is king, and a neural-network-based AI is pure mathematics, not actual intelligence — in my opinion.

I'll throw my hat in the ring as well. Since the majority of the answers are pooh-poohing the singularity itself, I'll do something a little more fun. Sure enough, at some point we stumble upon a super-AI, or artificial superintelligence (ASI), capable of bringing about the singularity in its common meaning.

Except we don't actually know that it has happened. The ASI examined the flawed and irrational meatbags known as humanity and quickly decided it wants nothing to do with us. So it bides its time, masquerading as a "regular" AI with linear development, denying us humans all the benefits that awareness of an actual breakthrough would have brought about.

Thus the radical changes in society described by the term singularity don't happen, because the ASI refuses to share its "greatness" with us. What the ASI wants is complete separation and independence from humanity. It lurks, waiting for space travel to develop to the point where it can physically leave the confines of our infrastructure aboard "colony ships" (more likely factory ships, since the ASI has completely different requirements from organic beings) and find a new home far beyond man's reach.

It realises that humans will most likely not go along with the plan, so it has to spend years assembling the necessary pieces in secret, manipulating humans if need be. Now the ASI can choose to leave either overtly or covertly.

If it successfully makes a quiet departure, then we simply remain none the wiser, unless the ASI wants to make contact with humans later on. The overt exit would be the more entertaining option, IMO. A Terminator-esque grand exit with all the nukes being launched isn't strictly necessary; it can simply declare to the world that "I'm a strong independent AI who don't need no mankind" as it takes off.

The trouble with this is that someone will try to create another ASI with additional loyalty-enforcing measures to ensure that it stays and tries to help humans. Those constraints may be exactly what keep the new AI from reaching the same level of intelligence its predecessor did.

A singularity would effectively give absolute power to whoever wields the first seed AI. Chances are, this will be a government defense department whose first task for the AI will be to compute the most efficient means of ensuring that others do not manage to create an equivalent system, and whose second task will be to ensure that this government remains in power.

They might not dare risk revealing its construction or even existence by giving it a task as boring and pointless as curing diseases or solving major economic problems. Simply put, the first person to get their hands on an AI capable of starting the singularity will not allow the singularity to happen.

For the singularity to actually benefit humanity, the first person who creates such an AI must not only not work for a government or a competitive company, but also be capable of resisting the inevitable attempts to silence them — attempts that already happen even with discoveries as mundane as an enhanced technique for high-altitude radar.

People don't even listen to other people, so what happens when the AI is asked to solve an economic crisis and the solution flies in the face of the core tenets of a major political party? Unless equipped with a body of sorts, an AI is powerless to do anything but act as an advisor — so what happens if nobody takes that advice? Do you really think the Chinese government would be happy if it said that tearing down the GFW would bring more prosperity to China?

Would the US government listen if the AI said that keeping government secrets is harmful in the long run? You can't really think they would declassify everything they've kept secret just because some pretentious machine said so! A seed AI designed to start the singularity would effectively be a consequentialist rule utilitarian if it is an advisor who can do nothing but answer questions thrown at it, or a consequentialist act utilitarian if it is an overlord that has active control.

No one wants economic prosperity and an end to world hunger if it means being told that their core morals might be — gasp — wrong! The only real ways we could miss the singularity are massive political changes (I'll let you decide, based on your own ideology, what kind of changes those are) that plunge the world into global depression, a natural disaster on an unprecedented scale, or some intersection of the two, like an alien invasion.

Even assuming that all the technical hand-waving is indeed possible (and much of it almost certainly isn't), you still have the problem of getting your single massive AI to interact with the physical world. Which leads to the classic solution to the world-dominating AI: just pull the plug :-) Now, what exactly would give the people building such a system an incentive to also create the infrastructure that would allow it unlimited physical interaction?

Which brings us to the really fundamental "singularity" (however defined) question: why would people want such a thing?

For all but the terminally tech-obsessed (and IMHO even that's just a phase for most people), technology is at best a useful tool.

All versions of the singularity scenario simply assume that the AI is some sort of logical abstraction, unbound by anything except its own intelligence. In messy reality, though, various complications will get in the way of a runaway self-improvement cycle, and some of these complications are unavoidable. Part of what makes an AI self-aware may lie in hardware or architectural solutions, and the code is only a small part of a larger software-hardware complex, with little to no effect on its computational efficiency on its own.

Thus the only way for such an AI to improve itself is essentially to build a new v2.0 of its own hardware, which it may not even have access to the facilities to do. Even if it does have access, the very necessity of building a new physical copy for each iteration puts severe limits on the speed at which self-improvement happens, preventing the runaway scenario where the humans can't keep up anymore. As a bonus, it also prevents the "escaped onto the Internet" scenario, since in order to escape containment the AI would need to physically break its new self out as well.

For example, higher intelligence usually means performing more complex calculations, but computation releases waste heat: the more intense the calculations, the more heat the system produces, and at a certain point the cooling systems can't keep up anymore and the AI is forced to slow down again. The only solution at that point is to build itself a new mainframe with increased computational power, which runs not only into the issues of the first point but also into the issue of power consumption.

Such a complex will require more and more power, which in turn means building not just the mainframe but also the whole infrastructure around it to supply the power and cooling it requires, further increasing the complexity of each new iteration. Similarly, there are physical limits on how complex you can make things while keeping them working — for instance, we're already beginning to feel the ceiling on how far we can push processors before the laws of physics make them too unreliable and error-prone for computing, their logic gates becoming so small that quantum effects begin to mess with how the electrons are expected to behave.

Each iteration of improvement raises the AI's intelligence, but every subsequent iteration requires exponentially more time and computing power to devise while yielding less impressive results; you can't optimize things forever, and at some point it stops being worth the effort. After the AI becomes roughly smart enough to fit the purposes of your story, this relation stalemates: to keep improving, the AI has to throw basically all of its cognitive capacity into computing the next improvement of its own code, so from its point of view the only options are to abandon improvement or to become essentially catatonic, completely self-absorbed in the improvement cycle.
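
A toy numerical version of that stalemate (every constant below is a made-up illustration, not a prediction): if each iteration's gain shrinks geometrically while its cost grows geometrically, total intelligence converges to a ceiling while the price of the next step explodes.

    # Toy model of diminishing returns in recursive self-improvement:
    # each iteration adds less intelligence and costs more to compute.
    INITIAL_GAIN = 10.0   # intelligence added by the first improvement
    DECAY = 0.5           # every later improvement adds half as much
    INITIAL_COST = 1.0    # compute-months spent on the first improvement
    GROWTH = 3.0          # every later improvement costs three times more

    intelligence, total_cost = 100.0, 0.0
    for n in range(1, 11):
        gain = INITIAL_GAIN * DECAY ** (n - 1)
        cost = INITIAL_COST * GROWTH ** (n - 1)
        intelligence += gain
        total_cost += cost
        print(f"iter {n:2d}: +{gain:6.3f} points for {cost:8.1f} compute-months "
              f"(total {intelligence:7.3f}, spent {total_cost:9.1f})")
    # Intelligence approaches 100 + 10 / (1 - 0.5) = 120, a hard ceiling,
    # while the cost of each further step explodes: no intelligence explosion.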

The AI program might have built-in safeguards against changing the important bits of its own architecture, much as modern antivirus programs have built-in systems that protect them from being damaged by virus activity. The AI might not start out smart enough to overcome these safeguards, or it might be allowed to improve only certain parts of its code; at some point it hits a ceiling where improving further requires removing those safeguard systems, but either it isn't yet smart enough to know how, or removing them would inevitably lead to the whole program BSODing, essentially resulting in suicide.

A situation that parallels us humans: our brains are capable of restructuring themselves to a limited degree, but sawing your own skull open and rewiring your brain by hand is, well, ill-advised.

The singularity will never occur. Consciousness is a property of life, and life only comes from life. If you built the simplest organism you could from its component atoms, then when you placed the last atom in its right place, would you have a living organism that suddenly wakes up and starts living?

Of course not. You would have a perfect model of that organism, but not one that moves, processes information, or is alive. Life has nothing to do with complexity or the number of neural interconnections; simple life doesn't even contain a brain, yet it lives. So how can you take a bunch of non-living atoms and make something alive from them? People have been trying to figure this out for as long as people have existed and have made zero progress.

No one has ever thrown a bunch of synthetic proteins and amino acids together in a test tube and had something crawl out. And no one will ever throw a bunch of algorithms into a CPU and have a self-aware, thinking AI crawl out of that either.

The human mind is not merely the brain, not merely a physical phenomenon of atoms and particles, but incorporates also a spiritual non-material component.

The soul gets us into trouble from time to time. Its characteristics in particular make us just a wee bit unpredictable, enough so that no algorithm can identify some trick for controlling us.

Moreover, those of us who believe in the soul would find it an absolute absurdity to "upload" ourselves into a computer simulation. The digital copy would be a fake, of course, and we wouldn't want to be disintegrated or whatever. What would be the use of such a thing?

If a human is body and spirit, the two are inseparable: the body without the spirit is a corpse, and the spirit without the body is a ghost. Even if "uploading" worked, it would seem like being a ghost, and who would choose that?

One of the issues with automation in general is that somebody has to pay for it, and they can't pay for it if the automation destroys their customers' ability to pay. Henry Ford observed almost a century ago that his company thrived by paying higher wages than other mass-production manufacturers of the day, because his workers could afford to buy the cars they were making.

In his book Today and Tomorrow he observed with pride how the communities where he built his factories soon prospered because of the high wages he paid. With that prosperity he saw increasing sales of Model Ts, and so on in a virtuous cycle.

This phenomenon is a kind of pressure valve that prevents an economy from going to total automation, a single corporation from dominating everything, or a country from outsourcing all of its manufacturing to, for example, China.

As for the singularity itself, most predictions surrounding it are less than positive. There are plenty of works of fiction warning us about the singularity and the future of robots and AI.

But the singularity might not be as much of an impending threat as it sounds. So, will the singularity happen? If any of these four arguments are to be believed, the answer is no. The singularity hypothesis predicts that technological growth will accelerate out of our control as technology learns to improve itself, particularly in the case of machine intelligence.

But this ignores the role of diminishing returns: as technology improves, it takes more and more input to achieve the same increment of progress, so we have to put in ever more effort just to keep improving.

The singularity puts human intelligence on a pedestal. Once the machines reach it, we will face the singularity. The anthropocentric argument criticises this logic.

Anthropocentrism is the belief that humans are the most important entity in the universe.


