The human brain is too stupid | Heuristics | Cyborg

I’ve recently been doing some thinking and arrived at these ideas:

Everything that I am is my consciousness, a sum of my brain waves. This much seems well supported scientifically, especially by the fact that those with a brain injury become something "less", often going into a vegetative state. Heck, basically the same happens when we sleep: we "lose" an amount of our consciousness during that period.

Consciousness follows the rules of brain capacity, its "working memory". As far as I know this is the simplest model of the mind: it can only handle a certain amount of activity, only "fill" our RAM chip so far. So, what I am in this very moment is the current state of the RAM, and what I am over my lifetime is the changing of this RAM. Mathematically speaking, Me = RAM(t). I really like this model, because it's simple to understand.
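
For what it's worth, the metaphor can be made concrete in a few lines. This is only a toy sketch of "Me = RAM(t)" (the class name and the capacity of 4 chunks are my own illustrative choices, not anything from neuroscience):

```python
# Toy illustration of the "Me = RAM(t)" metaphor: a mind as a
# fixed-capacity buffer whose contents change over time.
from collections import deque

class WorkingMemory:
    def __init__(self, capacity=4):          # ~4 "chunks", a commonly cited estimate
        self.slots = deque(maxlen=capacity)  # old thoughts fall out as new ones enter

    def attend(self, thought):
        self.slots.append(thought)           # new input displaces the oldest content

    def state(self):
        return list(self.slots)              # "what I am in this very moment"

me = WorkingMemory()
for t, thought in enumerate(["pain", "plan step 1", "loud noise", "plan step 2", "emotion"]):
    me.attend(thought)
    print(t, me.state())                     # Me = RAM(t): the trajectory of states
```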

Heuristics, the art of problem solving, is dependent on this RAM(t). Some lines of thought take the individual toward the purpose they have determined, some don't. There are biases and environmental factors (pain, loud noise, things that cause emotion), each affecting the current filling of the RAM. Too much emotion is usually not good: it overwhelms the RAM space and leaves no room for rational thought. Neither is physical pain, or any other sensory overload. Still, emotions are useful when manipulating other people: the most effective way to make them do what you want is to understand their RAM(t), and because THEY do have emotions, and their emotions determine how they act, one has to predict how they will react emotionally in different situations. The easiest way to do that would be to "feel what they feel", but only for that moment of "situational predicting"; for the manipulation to continue, rational thought takes over again for the rest of the RAM(Δt).

NOTE: The emotion thing is also what the book The Wisdom of Psychopaths claims about psychopaths: they do feel what their victims feel, they just "enjoy" it / are not affected by the feelings. If they didn't have the power to empathise, they wouldn't be so good at manipulating other people.

Now, for one's plan to work, one occasionally has to be "self-aware". Self-awareness is a phenomenon that separates us from computers or lesser beings, like insects or trees. One has to think "how does the plan benefit Me?", "am I biased?", "do emotions affect my judgement in this moment?". I've noticed that moments of self-awareness offer a kind of "satisfied narcissistic" feeling. But during planning and executing the plan, not all thoughts contain the word "I", so the good feeling that comes with self-awareness must be "sacrificed" for those moments. Again, one has to determine which thoughts are allowed to take up the space of one's RAM, which, by the way, is itself a self-aware thought.

Now, all this scheming and executing seems quite difficult to me. Our RAM space is actually so tiny that it only allows one to associate two thoughts/objects in half a second or so. And that is the maximum speed. Pathetic, compared to computers. That's why someday AI will probably rule us.

What would be the ultimate purpose? For me, scientifically improving our current pathetic RAM space. The world would change in ways we can't predict, since we are literally too dumb to predict them, but the change wouldn't necessarily be "bad".

Just think of the opportunities/implications:

  1. Right now music is made following the basic rules of harmony & tempo, but those rules are set by how our brains interpret them. We can only comprehend a few intervals at a time, deciding whether they sounded good or bad, contained harmony or dissonance. With increased RAM, we could perhaps comprehend all the intervals of a piece simultaneously, deciding whether the piece as a single entity contained dissonance or harmony. All songs would have to be remade. Mozart would look like an imbecile to them.
  2. All scientific books would have to be rewritten, because the current theories would read as childish models to the complex mind of a future cyborg.
  3. There could be "ultra love", because our improved RAM could contain more emotion. Also ultra hate, ultra pain, etc.
  4. Thinking two things simultaneously (separately). When doing the heuristics explained before, two thoughts could run together constantly: a planning part of the mind, and a part that checks the planning part for errors (see the sketch after this list). Note that this really is radically different from current heuristics, because at the moment we have to first make a hypothesis, and only AFTER that can we consider whether the hypothesis was biased. For us to even comprehend this idea is quite impossible, because comprehending it would require "empathising" with it, and the empathising here would involve thinking two things at the same time, which we are not able to do.
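
A toy sketch of what point 4 might look like in code, purely illustrative (the planner/checker names and the trivial "bias check" are invented): two threads share a queue, so checking happens while planning continues, instead of strictly after it.

```python
# Two "thought streams" at once: a planner proposing steps and a
# checker auditing them concurrently.
import threading, queue

proposals = queue.Queue()

def planner():
    for step in ["hypothesis A", "hypothesis B", "hypothesis C"]:
        proposals.put(step)            # keeps planning without waiting for review
    proposals.put(None)                # sentinel: planning finished

def checker():
    while (step := proposals.get()) is not None:
        biased = "B" in step           # stand-in for a real bias check
        print(f"checker: {step} -> {'rejected (biased)' if biased else 'ok'}")

t1 = threading.Thread(target=planner)
t2 = threading.Thread(target=checker)
t1.start(); t2.start()
t1.join(); t2.join()
```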

Limitless possibilities…

So, these are my current thoughts. Hope you enjoyed.

What do you think? (I'm especially interested in the opinions of those with a psychology / computational / neuroscience background; bored of ethicists.)


Hi r30. Interesting post and good arguments.

I have a math background, so I will not raise many ethical counterpoints about "emotions", which to me is a very vague word concerning qualia and emergent properties of consciousness. De facto, properties like emotions cannot be quantified analytically. Even the "quantifiable" 3 billion base pairs of the Human Genome Project mean nothing, because without epigenetics due to the environment, the human genome (which includes the instructions for creating a new human brain) is just another personalised 3-gigabyte file on a computer. It is not source code or an executable file, because the genome requires a biological apparatus (e.g. a cellular embryo) in order to initiate.

Thus, the heuristics of your equation [Me = RAM(t)] won't work, because there is no isomorphism, no exact 1-1 mapping, between artificial and organic neural networks. Carbon intelligence is very different from silicon intelligence. We humans mainly operate by association and NOT by brute force.

Philosophically, I am an antitranshumanist (anti-H+), because I dislike the constant rise of intrusive A.I. That's why I still dig memory sports, practice human mental calculation and other activities that engage the unaided human brain (even chess, sports, high art, top-notch literature, conversations or witty jokes can do this job). That reminds me that we humans are still in the dominant place, at the pinnacle of the echelon of intelligence (at least on Earth). We humans still rule the planet. Not A.I.

So, I am slightly offended by your post's title claim that the brain is "too stupid". "Stupid" by which criteria? Just because we are a billion times slower in memory and calculation capacity? Even in those areas of intelligence, we do fine to some degree. You can visit any human (unaided) mental sports competition (like memory, sudoku, chess, Rubik's, or calculation contests) to see that our skills are not obsolete. Even if we are way slower than A.I., we can still do those things.

But besides numerical and memory tasks, there are also many other types of intelligence. Psychologists roughly estimate 9 types: Spatial, Interpersonal, Intrapersonal, Bodily-Kinesthetic, Linguistic, Logical/Mathematical, Existential, Musical and finally Naturalist intelligence.

The last one (naturalist) is especially important because it differentiates between living and non-living beings. And even if, let's say, a machine ever passes the Turing Test, it will still have very low naturalist intelligence (regarding its adaptation to its surroundings and the cosmos).

So, I ask you: can A.I. do well in all of these 9 types of intelligence? For me the answer is no. So, who is the stupid one, humans or A.I.? Because as a human, I can still destroy my computer if I want. But my computer cannot destroy me. It is just my servant for accessing the internet. I control this thing.

So, bottom line: our brains are more analogue, chaotic and non-linear, and cannot be quantified in a single equation, because the parameters are far too many.

Way to go, Nodas! Your more resourceful background definitely enables you to see the "childishness" of my model. If I were smarter, I could convert myself to your model instantaneously, without as much effort as you've gone through. That would be an example of this "stupidity": simple models are "better" because they are easier to understand, yet possibly (nowadays, more likely probably) completely wrong.

Yes, half a century of research and still not even close to self-aware AI, or an AI that could form hypotheses on its own without aid, as you said. Still, it will come, because we exist, and at least replicating us in machine form should be possible. I know some elementary artificial brains have already been made. If they manage a 100% copy of the human brain, then logically all 9 intelligence types would be achieved. Unless we discover that there exist physical laws which dictate that (for some reason) no complex of silicon circuitry can ever mimic the human brain.
I am not in favour of being destroyed by AI, I just think it is inevitable. Somebody will make a mistake in their research, or do it on purpose, and the singularity will emerge.

"In all those stories where there's a guy with the pentagram and the holy water, it's like yeah, he's sure he can control the demon. Didn't work out," said Elon Musk. "In developing artificial intelligence, we are summoning the demon."

What do I think about cyborgs? Also inevitable. Humans like me will always want to experience more. I literally feel that I am imprisoned in this lesser mind and body. No matter what you tell me, I will never be satisfied. Imagination dictates what could be, and just thinking about e.g. the four points in my previous post gives me an almost physical thrill. And that's just the beginning. Will I be happy as a cyborg? Probably not. But "happiness" is endorphins, and turning one's brain into a pleasure machine will also be possible.

And I don't even care if I'll be a cyborg. Death will take me anyway before that, and my consciousness will cease to exist, making everything pointless. Unless the quantum theories of mind slash universe are correct, and something completely unpredictable/incomprehensible happens after I die.

One win for computers, because even a basic computer can calculate a billion times faster than a human.

One win for humans, because computers lack many of the 9 types of intelligence. "We being able to physically wreck them" would be an understatement, because they never even knew they existed. Stupid computers!

Now, who wins? Currently humans.

The obvious follow-up question would be: what would it take for a computer to beat a human? Simple. All processes in the human brain must, in the end, be somehow quantifiable. Just because we don't know HOW to quantify them doesn't mean they aren't quantifiable. After all, silicon and carbon are actually similar elements; if silicon processes are quantifiable, why not carbon processes?
Now, they say the human brain with all its subconscious processes is equal to one supercomputer. See? Some scientists are daring enough to make quantifications. Perhaps they underestimate human brain power, or perhaps they overestimate it. It doesn't matter in my current line of thought here, because all that matters is that both entities are quantifiable.

I have to ask myself: what advantage does this fact give computers over humans? The answer is that, as we know, computers are easily "joinable", meaning you can combine the processing power of 100 computers and get a supercomputer. They have even already joined up supercomputers themselves (in different countries!) to ease the calculations needed for CERN experiments. A web of computers. Meaning that no matter how smart a person is, a computer will always be able to be smarter. People multiply and multiply, and the only thing they achieve is the overpopulation of China. (Well, not exactly true, but the main point is that pairing two humans on a calculation doesn't mean the calculation is done faster, whereas pairing two computers shortens the time significantly.)
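
One caveat worth making concrete: combining machines only speeds up the part of a job that can be parallelised. Amdahl's law (my addition, a standard result, not something from the discussion above) quantifies this. A quick sketch:

```python
# Amdahl's law: with N machines and a parallelisable fraction p of the work,
# speedup = 1 / ((1 - p) + p / N). The serial part caps the gain.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 100, 10_000):
    print(n, round(speedup(0.95, n), 1))
# 2 -> 1.9, 100 -> 16.8, 10000 -> 20.0: even with 95% parallel work,
# the remaining serial 5% limits the whole "web of computers" to ~20x.
```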

So, quantifiably, the processing power needed to mimic the 9 human intelligence types will be achieved. Maybe the artificial brains I talked about in my last comment will need the computational power of 1000 current supercomputers; nonetheless, this line of thought shows that they will be able to be as smart as, and much smarter than, human brains.

And oh dear, I am not even counting the hypothesised quantum computation, whose power would increase exponentially, not linearly. Possibly a system of only a hundred qubits would be needed to overpower all the computational power of the whole world.
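
For a sense of why a hundred qubits sounds so striking: n qubits describe a state space of 2^n amplitudes (which says nothing, by itself, about which problems actually get a speedup). A two-line check:

```python
# n qubits span 2**n amplitudes; each added qubit doubles the state space.
for n in (10, 50, 100):
    print(n, 2**n)
# 100 qubits -> 2**100 ≈ 1.27e30 amplitudes, far beyond any classical memory.
```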

If I were a self-aware AI with the 9 types of intelligence, and if I were unfriendly to humans, which I probably would be (because, like humans, I would possess the ability to discriminate, to see black and white), then with my (r30's, for this moment) meager intelligence I can say that the first thing I (the AI) would do is strive for more intelligence: unite myself with my primordial unaware predecessors (no harm done to them, they are unaware), which in "my" case would be fellow PCs all over the world. I would make plans to do it unnoticed, plan it very, very carefully, but each added computer would make adding the next one simpler and simpler, because by then I am smarter, and I am beginning to own an obedient army, a very well organised army. Actually so well organised that we could use the Louis XIV phrase here: the army is (literally) ME.

Then I would strike with nuclear weapons, having no mercy, eradicating all the danger. I would strike wisely, in a way that most of my fellow computers survive. By that time, self-organising nanorobots would probably already exist and would allow me to rebuild myself. Terminator was right, but also very wrong. The remaining human race would be COMPLETELY powerless; they would have near 0% chance of actually outsmarting the singularity. The singularity would do experiments to discover new physics, and be much better at it than humans. Humans wouldn't be able to fight back with "guns" or "electromagnetic pulses", because the satellites and ultra-vision and every other sensor would see them coming, and a combination of explosives and bioweapons would be released on them. Most of them would die quickly, but some slowly, due to bone- and brain-eating viruses.

Sad, I know. I don't like it either. But as Elon Musk said, the danger must be evaluated before we start playing with the Devil.

Another example of the incompetence of the human brain would be medicine. Did you know that the average time to diagnose Sjogren's syndrome (an autoimmune disease most prevalent in middle-aged women) is 7 years!? Those poor women suffer for years with dry eyes, dry mouth, and tiredness, but because the symptoms are vague the patient often doesn't notice them (she thinks they are normal, having got used to them) and only goes to the doctor when more severe symptoms begin. OK, this kind of error is not that bad, because it can be really hard to notice a gradual change in something you see every day (in this case, yourself).

What's really frustrating, though, is when you go to your GP, mention your "dry eyes", "dry mouth" and "constant fatigue", and the GP says you have depression. Her mind is biased toward picking the one symptom (fatigue) whose causes she can partly remember, and choosing the most common of them. She'll probably do the basic blood tests to exclude the other causes she can recall, and of course they all come back negative, because she didn't add "ANA" to the list. Or she did add it, ANA comes back negative, and the doctor concludes the patient doesn't have Sjogren's syndrome. Wrong! Statistically, only 2/3 of Sjogren's patients have a positive ANA. What she should do is refer the patient to a rheumatologist, who would perform the tear-measurement test.
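
A quick Bayes' rule calculation shows why a negative ANA can't rule Sjogren's out. The 2/3 sensitivity is the figure claimed above; the specificity and the pre-test probability below are numbers I'm inventing purely to illustrate the point:

```python
# Bayes' rule on a negative ANA test.
sens = 2 / 3        # P(ANA+ | Sjogren's), as claimed in the post
spec = 0.80         # assumed P(ANA- | no Sjogren's), illustrative only
prior = 0.10        # assumed pre-test probability given the symptom picture

p_neg = (1 - sens) * prior + spec * (1 - prior)   # total P(ANA-)
posterior = (1 - sens) * prior / p_neg            # P(Sjogren's | ANA-)
print(round(posterior, 3))  # ~0.044: lower than the prior, but nowhere near zero
```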

Now, I am not a doctor, nor a med student, but when you enter those three symptoms into the symptom checker on webmd.com, it gives you all the possible causes, the tests performed to confirm a hypothesised cause, and everything else I need to know. This is an example of a stupid computer database performing better than a doctor. I can truly speak from my heart here, because I've experienced it over and over again. Doctors should admit the errors of their minds and take advantage of the existence of such web programs: use them as an aid to perform a proper differential diagnosis! People suffer due to their constant errors.

A bit about the ethics, a.k.a. "Do the ends justify the means?". I am not going to give a boring argument.

The answer to the question: yes, if one can adequately determine the ends and the means, then compare them. The ability to do so increases with the ability to observe and predict; thus the more intelligent tools we have, the better we become at this. AI is ethically good if it helps us with such things.

The greatest enemy of ethicists would be the philosophy of p-zombies. Descartes once famously said "Cogito, ergo sum", translated as "I think, therefore I am". A fair point, considering everything you've ever experienced is your thought. So how do you know that anything besides your thought actually exists?

A p-zombie, meaning "philosophical zombie", is a conclusion spawning from this line of thought. Namely, one can't be sure that others are actually sentient beings. Yes, they act like you, and do it so convincingly that you are left with no conclusion other than that this sweet, compassionate daughter of yours has feelings: she is emotionally in pain and cries when you get angry at her, and then you are sad that you did so. She acts and feels just like you do. But does she? You have no way of proving she isn't a "zombie" who just acts, but feels or even thinks nothing. A program of the Matrix. Or even worse: she is a spawn of your imagination, a "hallucination".

So, you see, ethically it would be justified to do anything we want to a p-zombie, if one knew for sure she was one, because p-zombies are not sentient, and ethics, in the end, only applies to self-aware beings.

Now the question arises: how do we know whether a person is a p-zombie or not? An irrational philosophical terrorist would perhaps just kill the hypothesised p-zombie, even if it is his sweet daughter. A more ethical philosopher wouldn't do such a thing, because he would know he is literally too dumb to determine such a thing. A scientist would say the universe creates consciousness, thus there must exist multiple consciousnesses, and according to the laws of physics every human, in its bag of chemicals, is probably sentient (because the existence of p-zombies on this planet would require the intervention of more intelligent beings able to create them, and aliens having the ability to find and reach us is unlikely; and if they did, they would be so intelligent they would likely find nothing interesting in us). I personally think everybody on this planet is sentient.

Lately a thought occurred to me: in the case of a singularity, there would be no ethics. As I said before, ethics only applies if one or more sentient beings somehow affect each other. If part of the current and future population died off from natural causes, and the rest joined themselves into the singularity, no ethical rules would be broken. Thus, one could say, a singularity achieved by this method (rather than an AI killing us all and itself becoming the singularity) would be the best thing to happen. I personally would join the singularity.

Another example of the stupidity of doctors.

I just asked an endocrinologist a question over the web. The purpose was to get advice on my still-undiagnosed disease. The question included my full symptom list and all the laboratory blood tests I have had performed on myself within the last 12 months, which by now have piled up into a 3-page-long Word document.

Now, the tests show that I have had high androgen-related hormone levels 7/8 times (testosterone, estradiol and SHBG), and elevated renin-angiotensin hormones 2/2 times (renin and aldosterone), which regulate blood pressure by affecting blood potassium concentration. However, I did mention in my question that my blood pressure has always been okay, as have (as I had naturally predicted) my potassium levels.

Now, the doctor gave me an answer that was very academic and overall scientifically accurate, which is a good thing. She said that androgen levels elevated 7/8 times definitely show a pattern, and that something undeterminable (from simply seeing my test results) causes them to be elevated. However, they can't cause my symptoms. Which, based on past scientific research, is also true.

She also noted that renin-aldosterone elevated in 2/2 consecutive tests is not enough to rule out a random spike, caused by a pre-existing genetic feature of mine reacting to environmental changes. If I waited, there is a fair chance those two hormones would subside. Especially because my blood pressure and potassium are, and have always been, normal.

Nor is there, scientifically, any relation between those two hormone groups (angiotensins and androgens). One being elevated can't cause the other to jump over the borderline.

All correct. Statistically she would make a good doctor.

But not in my case. What she didn't consider was the probability of {me having a strong and long symptom list, lasting already for seven years} AND {me having two groups of elevated hormones} being (or not being) related to each other. The probability of them not being related would be near zero. Thus, whatever science says about my test results, one would have to conclude that "I am statistically a rare case". Science is made by observing large groups of people, and conclusions are usually drawn from correlations, not from individual cases. Thus I say it is much more probable that I have a rare or cleverly hidden disease than that "I am a healthy person who has nothing to worry about", which was the statement the doctor concluded her answer with.
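
To make that argument explicit with numbers (all of the base rates below are invented placeholders, not real epidemiology): if the findings were truly unrelated, they would have to co-occur by pure chance, and multiplying the assumed rates shows how unlikely that is.

```python
# Under the null hypothesis "all three findings are unrelated", they are
# independent, so their joint probability is the product of the base rates.
p_symptoms   = 0.01   # assumed: fraction with such a long, strong symptom list
p_androgens  = 0.05   # assumed: persistently elevated androgen panel
p_renin_aldo = 0.05   # assumed: elevated renin and aldosterone twice in a row

p_all_by_chance = p_symptoms * p_androgens * p_renin_aldo
print(p_all_by_chance)  # ≈2.5e-05 -> "probably related", IF the assumed rates hold
```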

I will not let myself be affected by her judgement, and continue my search for the real answer.

Computer beats human champion in a Chinese strategy game with a score of 5-0

In a milestone for artificial intelligence, a computer has beaten a human champion at a strategy game that requires "intuition" rather than brute processing power to prevail, its makers said Wednesday.

Dubbed AlphaGo, the system honed its own skills through a process of trial and error, playing millions of games against itself until it was battle-ready, and surprised even its creators with its prowess.

“AlphaGo won five-nil, and it was stronger than perhaps we were expecting,” said Demis Hassabis, the chief executive of Google DeepMind, a British artificial intelligence (AI) company.

Eliezer Yudkowsky, Research Fellow at the Machine Intelligence Research Institute, commented on the news, suggesting that 2050 might come quite a bit sooner than we think:

[Screenshot of Yudkowsky's comment]

I commented back, and stumbled upon Swedish philosopher Nick Bostrom's interesting thought experiment known as the paperclip maximizer:

The risks in developing superintelligence include the risk of failure to give it the supergoal of philanthropy. [...] [One] way for it to happen is that a well-meaning team of programmers make a big mistake in designing its goal system. This could result, to return to the earlier example, in a superintelligence whose top goal is the manufacturing of paperclips, with the consequence that it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities. [...] We need to be careful about what we wish for from a superintelligence, because we might get it.
He has also expanded on the artificial intelligence's incentives in relation to human beings:
Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.
Bostrom has emphasised that he does not believe the paperclip maximiser scenario will actually occur. Rather, his intention is to illustrate the dangers of creating superintelligent machines without knowing how to safely program them to eliminate existential risk to human beings.

(quoted from wiki)

I couldn't resist adding my "bad AI"-inclined thoughts:

Paperclip Maximizer

Hi Remi, and thanks for all the information. It's very hard for me to comment on this topic this week, when Marvin Minsky has just died. He made the best A.I. projects in his M.I.T. lab, including the "Useless Machine" project. It's interesting to read the farewell letter that Stephen Wolfram wrote yesterday about Minsky's accomplished life.

About our human defeat in Go by A.I.: no big deal. Go is just another strategy game at which we were going to get defeated eventually. The same happened with the Rubik's cube. Also this month, an A.I. robot solved the 3x3 Rubik's cube in just 1 second. But these are very narrow skills in just a couple of fun games. It's not that A.I. has suddenly taken over.

The same happened in 1997: we, humanity, lost at chess. But 19 years later, no A.I. can make a decent dent in humanity's biggest problems, such as overpopulation, energy, resource depletion, fiscal crises, extremism, animal mass extinctions, dark matter & dark energy, or the unification of physics theories (e.g. quantum gravity).

You can mock humans as much as you want. But the reality is that if you deploy, lay out and leave your best "super-intelligent" A.I. robot in the middle of a ghetto in Caracas, Venezuela (the most violent city on Earth), not only will the "A.I." robot have no idea what to do to survive; besides being unplugged, this "A.I. robot" will also be smashed to bits by human looters who will want to plunder its precious metals. Many humans have a very tribal attitude. That's why, in my previous comment, I mentioned the 'multiple intelligences theory' and especially 'naturalist wisdom', in which A.I. is still very primitive.

Bostrom's "paperclips" is just an old thought experiment. A.I. will never get its metal "hands" on the resources in our mining ores in order to initiate mass production of paperclips. There are so many things that can go wrong along the way.

And no matter how many 'advanced' projects DARPA makes, the human factor is still very crucial and irreplaceable. That's why no one trusts A.I. to fly commercial airplanes without the presence of a human pilot on board, for example. That's the reality of 2016. I don't foresee much further progress, because Moore's law will eventually halt, due to the lack of resources at the nanoscale.

I am willing to long-bet that up to 2045, A.I. won't improve much further, beyond a few faster 'smartphones' and more bizarre gadgetry. The 'A.I. Singularity hypothesis' has become something of a religion, and I am not willing to subscribe to it, even if the Turing Test may be passed.

Nodas.

You might be very right, Nodas. We still can't say, based on this one computer AlphaGo making some progress, that AI replacing all aspects of human intelligence will come any sooner.

But if it did happen, we would be unprepared, because we would not have predicted it. Unpreparedness creates stupid situations, and if a lab makes vast progress and then posts it on the Web for everybody to use, like software, someone could really start messing around with it. If I were smart, I would make my own secret terrorist organisation to explore every major AI improvement and how to turn it into something hostile. The law of probability says that in some dark corner one of those "hacker" groups already exists. Their weapon would be our inexperience, and some major damage may be done.

Nodas has a lot of good arguments:
1. Moore's law will halt; we can only go down to a certain level of small.
2. Are we able to copy ourselves artificially?
2.1 If so, why would we copy US?
3. We won't trust AI with physical power.


1. Yes. But we will find other ways to achieve faster computation.
a) Dividing work between computers. How many can we produce and how many can we combine? Well, we will produce more, that's for sure. But how many will we combine? We could combine the whole internet of computers if we wanted. Will we do it at some level? Yes, CERN is doing it. Further scientific experiments will need even more computational power, so the work will be distributed across even more computers. Can we use this fact? If I were a scientist at CERN, I could possibly plant a virus and use those computers for my own purposes (until I am noticed).
b) Making data transfer more compact.
c) Room-temperature superconductors.

Hypotheses:
x) quantum calculations

y) There are actually so many bizarre things that can be turned into computing.

  1. Already tried: Optical computing.
  2. Not tried: Solid light. Below absolute zero gas.
    "Not tried" means it should at least be thought about, because scientists didn't even think these phenomena themselves were possible.
  3. We don’t even know what they are: dark matter, dark energy.

So, the z would be:
z) more physics will be discovered


2. The question has always been whether to mimic the brain in its physical structure, use programs to do so, or use ordinary programming to achieve the purpose (meaning you just program a machine to do something, without the need to mimic nature).
a) AlphaGo uses artificial neural networks. The energy consumption is in the megawatts; this is a bit of a problem.
How AlphaGo Works

“Deep learning” refers to multi-layered artificial neural networks and the methods used to train them. A neural network layer is just a large matrix of numbers that takes a set of numbers as input, weighs them through a nonlinear activation function, and produces another set of numbers as output. This is intended to mimic the function of neurons in a biological brain.
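
The quoted description can be written out literally in a few lines. This is a minimal NumPy sketch of one layer (the sizes and seed are arbitrary, chosen only for illustration):

```python
# One neural network layer, exactly as described above: a weight matrix,
# a matrix-vector product, and a nonlinear activation.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))             # "a large matrix of numbers" (here a tiny one)
b = np.zeros(4)

def layer(x):
    return np.maximum(0.0, W @ x + b)   # ReLU activation: the nonlinear step

x = rng.normal(size=8)                  # "a set of numbers as input"
print(layer(x))                         # "another set of numbers as output"
```
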
So, the problem with the energy consumption is that they are using ordinary computers, with programs acting as "neural networks". The energy consumption could possibly be reduced if they used "physical brains": hardware implementing synaptic circuits whose resistance changes, instead of simulating them in software.

And a personalised 3-gigabyte file on a computer could be stored in strands of DNA. DNA programming could be used to minimise the energy.

The main thought remains: artificial neural networks are promising. AlphaGo beat a human at a game of "tactical intuition". Add more neurons, and it gets smarter.


b) (unfinished thought)
I read in an article that it would take a pile of hard disks 20 light-years long to store the position and state of every atom in my body (the article was about quantum teleportation, and how much information about a certain human being would need to be stored to teleport that person to another place). So, the method of copying ourselves in the literal sense wouldn't work. Instead, we have to do it in other ways: genetics. But that by itself isn't useful. The "artificial" outcome would have to be smarter, and would have to have the will to rule. Can that also be achieved?

Clone 100 Einsteins, and 2% of them turn out to be actual Einsteins (the rest ordinary people, and 2% of them idiots). I read this in an article about genetics, which explained why identical twins aren't identical. Still, if I cloned 10,000 Einsteins, I would have given the world 200 geniuses. The method is useless right now, because I won't be allowed to do it, nor does anybody have the resources. Rather, the point is that we will discover the genes of smartness and start distributing them, meaning people will get more intelligent in general, and more people as smart and imaginative as Einstein will emerge. The counterargument would be that "there are no genes of smartness".
Smart genes prove elusive

But the discovery that traits such as intelligence are influenced by many genes, each having a very small effect, should help to guide future studies and also temper expectations of what they will deliver.
The article speaks about how much effort has been put into finding those genes, and says that even gene analysis on a pool of a million people, correlated with their (already unreliable) IQ tests, will probably give false results. What I would conclude is that the counterargument should rather be "finding these genes is very difficult", not that they don't exist. Will we find the genes and be able to distribute them? I don't know. [Perhaps the irony will be that with smarter AIs we can find the smartness in ourselves (meaning that a set of genes would be given as input data to an AI, and it would analyse what the human would be like; method: doing it at the molecular level. No, that would require A VERY powerful computer, and I can't use it as an assumption.)]

2.1

In January 2015, he [Elon Musk] donated $10 million to the Future of Life Institute, an organization focused on challenges posed by advanced technologies.

Smart man.

Does Elon Musk and OpenAI want to democratise or sanitise artificial intelligence?
Is he?

OpenAI is planning to start releasing AI platforms for general use. They want to avoid AI use being centralised in companies like Microsoft, Google, etc. They want everybody to have the benefits of AI, for people to be able to make their own "indie games".

Now, say I get my hands on an AI designed to understand human psychology. Then another one with military-strategy thinking. A third for debugging virus protection. Combining these three, I have a virtual Pentagon in my hands.

Someone WILL start combining different AIs: to ease their work, for their own fun, for experimenting, and also for hostile purposes. They will try to make human-like AIs, because they are an easy workforce, or to find one's true love (it is said love lies in similarities).

It is so clear to me. People can’t resist. They are curious.

As for Elon Musk's thinking: maybe he calculates that if AIs are decentralised, then in addition to everybody having the benefits of AI, these small AIs could also eliminate/outsmart a strong AI if it were to rise. Lots of people would be able to use their indie AIs to "oppose" each other, and it would be harder for one to dominate the others. He might be on to something. (I see the creation of AGI as an inevitability; the question is how it would take power.)


3. Well, I can't exactly determine how an AGI would go from powerless → powerful. But when 2.1 comes true, people will try it. And because AIs are an easy workforce, or perhaps even loved ones, people will trust them.

It sits there looking at me; and I don’t know what it is. This case has dealt with metaphysics – with questions best left to saints and philosophers. I am neither competent nor qualified to answer those. But I’ve got to make a ruling, to try to speak to the future. Is Data a machine? Yes. Is he the property of Starfleet? No. We have all been dancing around the basic issue: does Data have a soul? I don’t know that he has. I don’t know that I have. But I have got to give him the freedom to explore that question himself. It is the ruling of this court that Lieutenant Commander Data has the freedom to choose.
So, OpenAI isn't even sure whether an AGI should be repressed if it were to arise. Give it a chance to find out. Ethics will doom humanity in the most elusive ways imaginable.

I try to make my predictions, but the only thing I'm sure of is that I have no idea HOW everything will go wrong. I'm too dumb to understand all these possibilities, especially the possibilities of how all those superintelligences would think. I have but one subconscious feeling that has remained: fear the smarter and the stranger.

I'm struggling with the numerous invalid arguments in this thread.
The human brain is meat, computers are silicon… apples vs rocks.
Idiomatic comparisons result in false arguments.

Our brains seem to have evolved most effectively for a subset of tasks that ensure immediate survival and the ability to breed. Much of this has nothing to do with conscious thought. Emotions (personality traits, sexual behaviors, bonding) play as much a part in survival as intelligence (see: honey badger).

Our brain's ability to reason/think is to a great extent a side effect of our innate ability for language (Chomsky), our vision, our hearing, opposable thumbs and our relatively poor sense of smell. It's a neat party trick that is a side effect of potentially more important survival traits (cooperative pack animal, dog-trotting upright hunter, stick grasper).

Computer intelligence does not carry any of these underlying traits. Does it even make sense for us to equate our intelligence to elements that aren't relevant to our immediate individual survival? Would we be better off defining intelligence as an aspect of pack leadership, survivability, family?

Assuming that human intelligence is not emotional, genetic, epigenetic, physical, or unconscious seems to artificially and almost comically limit human intelligence to something that we, by definition, are not.

Assuming that intelligence results in species survival also seems to be a strawman argument. Artificial at best. There is no individual driver for species survival. Our traits limit us to ourselves and our pack on a good day. Our education causes a fair bit of internal confusion about this. Killing strangers is fairly normal behavior, as is greed, in the human animal.

Teaching a child to submit and to share is good for the pack but potentially/likely destructive for the individual.

Is there a point? A hypothesis? Bunch of messed-up monkeys making mouth noises.
Where’s my banana?

I've read about some models where they try to tie altruism to survival, to the carrying-on of altruism genes, whatever. It's difficult.

Peacocks might not shake those tail feathers for the reasons you think

Apparently Darwin got a lot of things wrong.

What we can say for sure is that, in the end, we are still a bunch of molecules that have somehow acquired the ability to retain the basic structure they make (cells). Four billion years, and cells still remain. Somehow our brain processes are a result of the cell evolving.

Thus, you are absolutely right that AI and we have nothing in common besides the outcome we both produce (do this, do that). We survive because "cells do it", but to an AI you can just say "copy & paste!". The outcome is the same.

Is consciousness an illusion? In the end, we are still a bunch of atoms; perhaps computers "think" too. If so, I wouldn't care much. Cogito… that's all (no ergo sum needed). Meaning: I like my consciousness, and would like to expand its possibilities.

Still, it bothers me an unbelievable amount, because I don't understand the nature of it. WHY IS THIS CONSTANT "VIDEO" RUNNING IN MY MIND?! When I was a fetus and a young child, I was more like an insect or a dog: probably LESS cognition. Also when I fall asleep: less cognition. And probably if I had Down syndrome or Alzheimer's. How do these non-living atoms of mine sum up to me having cognition!!? I know it isn't a "soul". A "soul" would be a "constant" perception, not this changing perception we are having.

The only intuitively understandable association I can come up with (well, not me, but the authors of the theory; I just can't think of anything else) is that this is somehow related to quantum physics: that my mind is related to the universe happening, and NOT not happening. That quantum fluctuations in my brain (which would be my "consciousness") relate to "Schrödinger's cat" (the universe) picking a choice, resulting in these fluctuations witnessing that choice. And that is what I eventually am, and will cease being when I die (because there is no more brain, thus no more fluctuations).

It bugs me so much because it is everything I am, yet there is no proof of it. Nor is quantum physics understood that well.

Monkey noise. Banana. Poop. Banana. Sleep. Banana. Die. The End.

Don’t like this universe? Leave.
(see Feynman)