Hypercomputers and computing the uncomputable

Philosophers and computer scientists are pondering how to build machines that follow a different recipe from the one Alan Turing laid down in 1936, which underpins every computer we know today.

Today's computers are all modelled on principles Alan Turing described in 1936.

With machines such as IBM's Watson, which won Jeopardy in the USA, these so-called universal Turing machines, which only saw the light of day after the end of the Second World War, have now been refined almost to perfection.

Watson, and for that matter every other supercomputer and future quantum computer, is still a Turing machine, and there are therefore certain things these machines will never be able to compute.

The interesting question is whether one can design and build machines so fundamentally different that they can, so to speak, compute the uncomputable.

Such hypothetical machines have been given names like hypercomputers and super-Turing machines, and they are variations on what Alan Turing himself called oracle machines.

For many years there has been an at times intense and heated debate among experts about whether a hypercomputer is anything more than an abstract logical concept, and what, if anything, stands in the way of a physical realisation.

A related question is whether the brain is such a hypercomputer, and whether it can be imitated in some kind of machine.

Turing machines have several limitations. One of them is that such a machine cannot, in general, decide whether a given computation will halt and deliver the desired result or run into an infinite loop.

A hypercomputer would be able to solve this so-called halting problem. It would know whether it had ended up in an infinite loop or would eventually emerge from a long spell of apparent inactivity, ready for other tasks. That is knowledge any computer user would be delighted to have.
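The core of the argument can be sketched in a few lines of code. The sketch below is purely illustrative: the function `halts` is a hypothetical oracle of exactly the kind Turing proved cannot exist as an ordinary program, and `paradox` is the self-referential construction that rules it out.

```python
# Illustrative sketch of the halting-problem argument. The oracle `halts`
# is hypothetical: Turing showed no ordinary program can implement it.

def halts(program_source: str, input_data: str) -> bool:
    """Claims to decide whether `program_source` halts on `input_data`."""
    raise NotImplementedError("uncomputable for Turing machines")

def paradox(program_source: str) -> None:
    """Halts exactly when the oracle says it loops, and vice versa."""
    if halts(program_source, program_source):
        while True:      # the oracle said "halts", so loop forever
            pass
    # the oracle said "loops forever", so halt immediately

# Running paradox on its own source code contradicts any answer the oracle
# could give, so `halts` cannot be realised by a Turing machine. Only an
# oracle machine (o-machine) could supply it.
```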

The disagreement among experts about hypercomputers runs so deep that they are far from agreed even on how to interpret the thoughts that Alan Turing and his contemporaries had about what machines can compute.

The philosopher Jack Copeland of the University of Canterbury in New Zealand addressed these questions last week in lectures at the University of Copenhagen, where he has been a visiting professor in recent months.

Jack Copeland coined the word hypercomputer in a 1999 article in Scientific American about what he called Alan Turing's forgotten ideas in computer science.

He is one of the world's leading experts on the works and thinking of Alan Turing, currently the subject of a feature film, and we have to go all the way back to Turing's groundbreaking 1936 paper 'On Computable Numbers' to find the starting point of the current debate.

David versus Goliath

In that paper, the then unknown 24-year-old British PhD student took on the foremost mathematician of the day, the German David Hilbert, famous for having set the tone for twentieth-century mathematics in 1900 with a list of great unsolved problems.

In 1928 Hilbert posed a new challenge in the form of the so-called Entscheidungsproblem (decision problem), which aimed to mechanise or automate the way mathematical proofs are produced.

In from the sidelines came Alan Turing, who showed that the German master was wrong to believe it would be possible to devise such an algorithm, one that could decide, quite generally, whether a given mathematical statement can be proven.

Turing gave a mathematical proof that there are certain things, or certain numbers, that a machine cannot compute.

As is well known, pi is an irrational number with infinitely many digits after the decimal point. Even so, pi is a computable number: we can give a computer the task of finding digit number n after the decimal point, and it will be able to do so whether n is 1,000, 1 million, 1 billion or any other number.
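That pi is computable in this sense can be made concrete with a small program. The sketch below, based on Machin's formula and plain integer arithmetic, is just one of many possible recipes; Turing's point is only that some mechanical procedure exists for reaching any desired digit.

```python
# Minimal sketch: the n-th decimal digit of pi via Machin's formula,
# pi = 16*arctan(1/5) - 4*arctan(1/239), using exact integer arithmetic.

def arccot(x: int, unity: int) -> int:
    """arctan(1/x) scaled by `unity`, summed term by term."""
    power, total, n, sign = unity // x, 0, 1, 1
    while power:
        total += sign * (power // n)
        power //= x * x
        n += 2
        sign = -sign
    return total

def nth_digit_of_pi(n: int) -> int:
    """Digit number n after the decimal point of pi (n >= 1)."""
    guard = 10                      # extra digits to absorb truncation error
    unity = 10 ** (n + guard)
    pi_scaled = 4 * (4 * arccot(5, unity) - arccot(239, unity))
    return (pi_scaled // 10 ** guard) % 10

print(nth_digit_of_pi(1))    # 1
print(nth_digit_of_pi(5))    # 9  (3.14159...)
print(nth_digit_of_pi(100))  # 9
```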

There are, however, irrational numbers for which this is not possible, which is why Turing called his paper 'On Computable Numbers'.

But what exactly is a computer, one might reasonably ask?

In Alan Turing's day, a computer was a human being who carried out mathematical calculations. Computers were employed by insurance companies, by research institutions (civilian and military) and anywhere else large numbers of calculations had to be done.

Computers were handed data and a prescription for the calculations they were to perform. In many cases they had no idea what their calculations would be used for. They simply ground through the numbers, exactly as our machine computers do today.

In his paper, Alan Turing defined a computing machine as an idealised version of a human computer, that is, a universal machine able to carry out any conceivable task according to a program stored in the machine. And it was from that recipe that the first universal computers were built.

But as Turing had shown, there were certain functions, or numbers, that such a machine, later named a Turing machine, could not compute.

The American mathematician Alonzo Church had, at the same time and by other means, proved the same thing. Today their shared insight is gathered in the so-called Church-Turing thesis. The trouble is that it is formulated in different ways, and Jack Copeland distinguishes between two main formulations.

The first, which Copeland attributes to Turing, runs, in Copeland's wording, as follows:

'A universal Turing machine can perform any calculation that can be performed by a human computer.'

Turing himself had, in a later paper, raised the idea of a so-called oracle that could supply answers to questions a universal Turing machine cannot compute. By including the oracle one would obtain a so-called o-machine, which would perform calculations in a different way than a human computer.

How such an o-machine might actually be built was not something Turing speculated about.

The second main variant of the thesis runs: 'Anything that can be computed by any kind of machine can be computed by a universal Turing machine.'

Implicit in this statement is that o-machines do not exist. Jack Copeland calls this the M-thesis, where M stands for maximum, in the sense that it is the maximally strong claim.

According to Jack Copeland, the reason Turing described the logical possibility of o-machines at all was that he believed there was a difference between mathematicians and human computers.

A human mathematician could, with the help of intuition or some other form of insight, carry out tasks that a mechanically minded human computer could not.

Jack Copeland explains that Turing regarded the brain as functioning at any given moment like a Turing machine, but as different machines at different times. A kind of randomness was attached to the jumps between the different 'machines', and in the long run the brain would therefore be uncomputable.

One of the numbers Turing showed to be uncomputable by a computer (human or machine) following a mechanical prescription (a program), he considered in principle computable by a mathematician, precisely by switching between different methods.

The interesting question, Jack Copeland argues, is therefore whether this can be carried over into a new kind of computing machine.

Many believe, however, that the laws of physics forbid the realisation of hypercomputers. Critics argue, for example, that it would require an infinite amount of information to be contained within a finite region, which is impossible. Jack Copeland believes he has refuted this objection.

Nor is he alone in speculating about, and more concretely investigating, the possibility of new approaches to computer design that other clever minds regard as a hopeless mission.

The Turing expert Andrew Hodges, whose book is the basis for the current feature film 'The Imitation Game' about Alan Turing, believes that Jack Copeland has completely misunderstood the message of Turing's papers, and he has called Copeland's April 1999 article in Scientific American, in which the word hypercomputation was launched, an April fool's joke.

When another leading Turing expert, Martin Davis, professor emeritus at New York University, was asked in 2006 to write an introduction to a special issue of the journal Applied Mathematics and Computation on hypercomputers, he delivered a four-page piece entitled 'Why there is no such discipline as hypercomputation'.

Fact box: An analogue super-Turing machine may be on the way

Hava Siegelmann, today a professor at the University of Massachusetts in the USA, launched ideas in the mid-1990s for an analogue computer in the form of a neural network that could perform computations not possible with a digital Turing machine.

Her 1995 article in Science was met with criticism from, among others, the mathematician Peter Shor, today known for an algorithm that quantum computers could exploit to crack encryption keys. He acknowledged, however, that the question of whether analogue computers could outperform digital ones was important to address.

Steven Younger and Emmett Redd of Missouri State University are currently working to realise Siegelmann's ideas. They believe this kind of super-Turing machine is not subject to the fundamental limitations that other researchers ascribe to more general hypercomputers.

Comments (25)
#3 Allan Jensen

A theoretical super-Turing machine might be able to solve the halting problem for Turing machines, but it still could not solve it for itself. The trouble with solving it is precisely that such a solution can be used to construct a paradoxical program that cannot be handled. The only way around this is to solve it only for machines with a smaller computational space than the one the solution itself runs on.

#5 Jens Klausen

The idea needs a bit of development. Perhaps that is where human intelligence comes from?

https://plus.google.com/106485571915791440080/posts/X2hspqCC8Tg

I propose the following as a way of possibly enhancing the detection of non-randomness in the quantum fluctuations that generate electronic noise, which could evolve into the detection of something beyond the human state of being.

In short: use many small avalanche-breakdown zener diodes to get a high electric field, many localized measurements and a high initial amplification.

Digitize the entire noise waveform from the zener diodes.

Use a genetic algorithm that evolves populations of neural networks to detect correlations between the many noise channels depending on outside events, such as an operator wishing to move a point on the screen to the left, right, up or down. But the events can of course be as many and varied as the other events that true random number generators have been used to detect anomalies in.

Is it only the genome that codes the structure of the brain so that scientific theories always get better and better, or is it a far more advanced continuous coding, in the form of ordered quantum fluctuations from an invisible realm in the brain, working together with the brain that results from the genome, that is mostly responsible for the fact that scientific theories keep improving, or is it something else? If it is only the genome that is responsible for scientific theories always getting better and better, how does one account for the fact that none of the best computer programmers are even close to building a machine that can improve science by itself? Is natural selection on events seemingly unrelated to modern science really that powerful in shaping the genome, so that the genome by itself can improve scientific theories?

Why can the supposed code in the genome that makes scientific theories better and better all the time not produce a similar code elsewhere? Changes in computer code can be made much more rapidly than changes in the genome, random or guided, but no group of programmers has succeeded in making an AI powerful enough to make better and better scientific theories by itself. I say that perhaps such a powerful code, one that can improve scientific theories by itself and has a size of less than 3 GB, cannot exist in the genome or anywhere else. I propose an experiment to investigate whether order in quantum fluctuations originating from an invisible realm is perhaps responsible for scientific theories getting better all the time:

I think that a brain cannot make better and better scientific theories without some careful guidance of how the neurons and synapses grow. Negligent or absent control of the growth and deletion of neurons and synapses will invariably lead to scientific theories getting more and more incorrect, because there are many more wrong theories than correct ones, I hold. The control that I hold is needed must come from the genome, if not from somewhere outside a brain defined as following only the already known natural laws and not much else.

If that is so, why is something similar not witnessed in computers, which can mutate a genome defining a simulated neural network thousands of times each second and evaluate the result each time? That is much faster than what happens in human genomes, where a mutation occurs perhaps once every 20 years or so, after which the result undergoes some evaluation.

I am not convinced that faster computers will change this.

I do not think that the control of how Einstein's brain developed was negligent or absent. Einstein, by the way, could not accept that quantum fluctuations were random.

http://en.m.wikipedia.org/wiki/Bohr%E2%80%93Einstein_debates

The next shock came in 1926, when Max Born proposed that mechanics was to be understood as a probability without any causal explanation.

Einstein rejected this interpretation. In a 1926 letter to Max Born, Einstein wrote: "I, at any rate, am convinced that He [God] does not throw dice."

Bell's inequality could perhaps rule out all kinds of fixed hidden variables created together with the particles if there is no non-locality. The measurements confirming non-locality do NOT RULE OUT ALL KINDS OF DYNAMICALLY CHANGING NON-LOCAL HIDDEN VARIABLES, IF ONE LIKES, that change in a non-random way, sometimes at least partly controlled by a supreme intelligence referred to as God by Albert Einstein, or by other intelligences, I think? The question to me is: what else could improve scientific theories over time?

Bell's theorem states: No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics.

I can understand why order can increase in a genome so that the best swimmer in water can evolve, because the somewhat random mutations and recombinations in new individuals of a particular species will tend to favour those that can swim better using a minimum of energy. The genomes that give better swimmers will be passed on more, and the nucleotide base pairs that give bad swimmers will not be copied as much, since the bearers of such nucleotides in their genome will get less offspring. The same with IQ in humans in some environments. Later, when we can synthesize DNA cheaply, this process of getting better genomes could speed up tremendously.

But I can't understand how scientific theories, for instance in physics, get better and better within a generation. Natural selection in the genome doesn't work over such small periods of time. What principle removes the more wrong scientific views and spreads the more correct ones in the brains of scientists and lay people, so that science and technology evolve within a generation of 20 years, for instance? Neural networks alone do not seem capable of that? I have never seen an explanation that should make them able to do so.

It cannot, for instance, be by the process of the scientists with the wrong views dying before the scientists with the correct views, since the scientific theories improve within the time span of a single generation.

There are many more wrong scientific theories than more correct ones. Should the second law of thermodynamics not prevent better and better theories from becoming dominant? And then add to the neural networks the, according to theory, completely random quantum fluctuations. Then better and better scientific theories seem even more impossible.

I have never seen an explanation on how a neural network alone could create consciousness or the illusion thereof either.

I would think that connectomes of brains, each simulated in a very fast conventional computer connected to the outside world with cameras and microphones and able to manipulate the outside world like robots, would never be able to develop better scientific theories but would make scientific theories worse, since they would have no way of judging the better theories from the worse, or if they had a way to judge in the beginning they would lose that ability more and more over time.

I speculate that what is happening is that the quantum fluctuations in brains are not random, that signals from higher worlds or parallel realms or universes are coming in all the time and are doing many things including making better and better scientific theories. Maybe dark matter is in other realms and we only feel the gravity but maybe we are also connected via quantum fluctuations and maybe higher intelligences reside there?: https://www.youtube.com/watch?v=e4nnpg4N35o

Who else mentioned an invisible parallel world? Plato did: http://www.trinity.edu/cbrown/intro/plato_two_worlds.html Plato might not have been right about all the specifics, but he might have started a field of science that in our times has perhaps been abandoned by mainstream science, which may not be a good thing?

This way of improving scientific theories, by the influence of quantum fluctuations, could also work on beings living in the physical universe we observe around us that are 1 million or 1 billion years ahead of us in evolution. Evolution seems, over time, to produce more and more intelligent living beings. This is what we observe has happened on Earth. If that is true, the quantum fluctuations should contain information enabling us to discover a science far beyond the science we are now aware of.

We have the Fermi paradox: why don't we observe aliens? Maybe in a not so distant future humans, or what they have become, may prefer to live in other parallel realms, maybe where there is less impediment to learning from what we in our realm describe as quantum fluctuations, which could be a shadow play of something far more important? Maybe we are not evolved enough to escape the capture of the physical universe we see around us? But soon we may be, and so the transmission of radio signals from an intelligent species may not last for more than a few centuries, or they may find better modes of communication? But if some very highly evolved beings would voluntarily choose to live in the physical universe, why could quantum fluctuations not teach them in the time they spend in the physical universe too?

There could be an enormous amount of information hidden in quantum fluctuations if only we could develop progressively better and better keys to decipher them?

There is a huge electric field across the cell membrane of each neuron and synapse in the brain (about 15,000,000 volts/meter), and that could amplify the effects of quantum fluctuations in the cell membrane, I think? In a very strong electric field, electron-positron pairs would be created by quantum fluctuations and produce a current in the electric field, generated and controlled by quantum fluctuations; in a lesser electric field, local charge inhomogeneities would be created and produce a current too, I think, and the quantum fluctuations could push on molecules transported in the ion pumps, influencing the precise time the neuron fires. Also, the moment when a quantum tunneling event occurs could be controlled and not just be random? In the many sodium-potassium pumps and sodium and potassium channels and elsewhere, the signals in the brain will be influenced by quantum fluctuations, maybe sometimes as a butterfly can influence the weather, because of the butterfly effect?

But maybe the effect of the quantum fluctuations on the neural activity in the brain is larger, relative to that activity, than the signals in a butterfly's brain are relative to the weather they help form some months ahead; or else it could be difficult to measure.

If the quantum fluctuations are controlled very precisely, or at least in part, by a very advanced intelligence, they could maybe look rather random but still animate a living being with an intelligence that improves scientific theories, and influence developments in small groups of people and in society as a whole, but I think they could not be completely random.

If one is on the moon, or maybe in whatever galaxy one could get to, one would still feel like the same person and could maybe function as normal without much change, but maybe something new could also be added that is not solely dependent on the physical surroundings but maybe also on the nature of the Gods that generate the quantum fluctuations in that particular place? People themselves may be part of Gods?

There could be a God reigning over a field as large as the field of the natural laws we now know and can later discover, and for as long a time as those laws exist?

“To travel is to live.” ― Hans Christian Andersen, The Fairy Tale of My Life: An Autobiography Edgar Mitchell's Samadhi Experience https://www.youtube.com/watch?v=8d56dwSm2YQ

The more compressed a computer file is, the more random it appears to be to someone who does not have the key to decode it. I also believe I have read that the so far unknown digits of pi and e far out, perhaps after the first googolplex digits, would be indistinguishable from random digits if they were given to us. So what appears to be random may not actually be random.

I think the brain is not an advanced enough key to decode signals that, with what technology can do today, are indistinguishable from completely random quantum fluctuations. I think that maybe quantum fluctuations can transmit information without changing any of the conserved physical quantities in a closed system, except over very short periods of time. But that may be enough to animate a living being. Signals may also travel to higher parallel realms without altering any of the conserved physical quantities in a closed system, except when delta time is very small.

It was the quantum fluctuations in the very early universe that determined the large-scale structure of the Local Group and probably also the Milky Way, so they can have an effect on the physical world: We are amplified quantum fluctuations - YouTube https://www.youtube.com/watch?v=ltK8aR9uHW0

Could there have been a deliberate plan behind how the quantum fluctuations manifested themselves in the very short time span, visible in the Cosmic Microwave Background Radiation, that produced the large-scale structures of the universe and the Milky Way, so that what was later supposed to have an opportunity to happen could happen? Are the quantum fluctuations always of such an intelligent nature, and is there a Divine intelligence far beyond human intelligence always working much, much faster than the human mind, with us most of the time unable to perceive it?

Could there be a dynamically changing field of a very advanced intelligent nature in other parallel realms that is able to animate whatever has the right structure in this universe, so that it can appear animated to us, like animals and humans?

HERE IS A METHOD TO TEST MY HYPOTHESIS:

Take a number of zener diodes in a Faraday cage and make an electronic noise channel for each of them by letting a current flow in their reverse direction. That way there will also be a high electric field (about 30,000,000 volts/meter) where the noise is generated inside the zener diode. So the noise here, like some of the noise in the brain, is also generated where there is a large electric field.

In the brain there are trillions of separate locations that can be influenced independently by quantum fluctuations, and it is unclear to me how many zener diodes are needed and how small they need to be in order to detect a clearly measurable effect of non-randomness in quantum fluctuations, but it could be possible, because I don't think the experiment has been done yet. Which could make such an experiment a remarkably low-hanging fruit?

Digitise each zener diode noise channel with an AD converter after the noise has been amplified, and feed those "noise" channels into a feed-forward neural network whose weights are determined by using a genetic algorithm to evolve the weights in a population of networks (a toy sketch of this loop follows below).

An operator could, for instance, try to make a dot, controlled by the zener diode noise fed into the neural network, behave as the operator intends, with no scientifically known possible connection established between the person and the zener diodes.

While the operator is intending that the dot on the screen should move to the left, the neural networks that move the dot to the left using the zener "noise" channels will be favoured in the simulated natural selection of neural networks performed in the computer. The same procedure applies to all the other directions the dot can move in.

After a long time of training like this, both the operator and the neural network may have evolved such that the operator can move the dot around on the screen simply by intending it, even though the zener diodes are isolated in a Faraday cage. A situation perhaps similar to the learning that takes place when humans learn new skills, like moving their arm in different directions.

Another question is: how should the digitized, amplified zener noise be presented to the neural networks? Maybe it could be beneficial to Fourier transform the noise signal using an FFT and convolve some of the zener diode noise signals together before presenting them to the neural networks?
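Purely as an illustration of the machinery described above, and not of any claimed effect, here is a minimal sketch of such a setup: synthetic "noise" channels fed into a tiny feed-forward network whose weights are evolved with a simple genetic algorithm. Every name and number in it is invented for the example, and the stand-in noise comes from an ordinary pseudo-random generator rather than from zener diodes.

```python
# Toy sketch: evolve feed-forward network weights with a genetic algorithm
# on synthetic noise channels. All parameters are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
N_CHANNELS, HIDDEN, N_DIRECTIONS = 16, 8, 4      # noise inputs, hidden units, up/down/left/right
POP, GENERATIONS, TRIALS = 32, 30, 50

def init_genome():
    # Flat weight vector for a one-hidden-layer network.
    return rng.normal(0.0, 1.0, size=N_CHANNELS * HIDDEN + HIDDEN * N_DIRECTIONS)

def forward(genome, x):
    w1 = genome[: N_CHANNELS * HIDDEN].reshape(N_CHANNELS, HIDDEN)
    w2 = genome[N_CHANNELS * HIDDEN:].reshape(HIDDEN, N_DIRECTIONS)
    return int(np.argmax(np.tanh(x @ w1) @ w2))  # chosen direction 0..3

def fitness(genome):
    # Fraction of trials where the network's output matches the "intended"
    # direction. With genuinely random inputs, anything above chance (0.25)
    # only reflects selection on lucky trials, which is exactly the null
    # hypothesis such an experiment would have to rule out.
    hits = 0
    for _ in range(TRIALS):
        noise = rng.normal(0.0, 1.0, size=N_CHANNELS)   # placeholder for zener data
        intended = int(rng.integers(N_DIRECTIONS))      # placeholder for operator intent
        hits += forward(genome, noise) == intended
    return hits / TRIALS

population = [init_genome() for _ in range(POP)]
for generation in range(GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[: POP // 4]                        # truncation selection
    population = [p + rng.normal(0.0, 0.1, size=p.shape)  # mutated copies of parents
                  for p in parents for _ in range(POP // len(parents))]

print("best fitness after evolution:", max(fitness(g) for g in population))
```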

WHY DOESN'T ANYONE PERFORM SUCH AN EXPERIMENT? I HOPE TO MAKE SUCH AN EXPERIMENT MYSELF, BUT OTHERS WOULD HAVE MORE TIME AND RESOURCES TO GET ANSWERS FAST.

Is the above hypothesis true or false, using the scientific method?

I might add that some work may already have been done at Princeton on detecting intention's small effect on matter: http://www.princeton.edu/~pear/experiments.html Global Consciousness Project (GCP): http://www.youtube.com/watch?v=itQMALL__bE

and elsewhere: "Science and the taboo of psi" with Dean Radin - YouTube, Uploaded by GoogleTechTalks https://www.youtube.com/watch?v=qw_O9Qiwqew

COULD THE ABOVE-DESCRIBED EFFECT, IF ANY, BE EVOLVED TO A POINT WHERE WE CAN OBSERVE WHAT IS HAPPENING IN OTHER PARALLEL UNIVERSES AND MAYBE GO THERE OURSELVES (WE MIGHT BE THERE ALREADY?), MAYBE IF HIGH-IQ PEOPLE BEGIN TO WORK ON THE TECHNOLOGY AND PRODUCTS CAN BE MARKETED TO DRIVE THE INVESTMENTS IN IT, OR IF PRIVATE CHARITY OR GOVERNMENT FUNDS ARE USED TO INVESTIGATE?

I THINK THAT IF SUCH AN EFFECT IS THERE, ONLY VERY FEW CAN NOW TELL WHAT A TECHNOLOGY USING IT COULD EVOLVE INTO. Like in much basic science.

John Maynard Keynes bought many of the papers by Isaac Newton written in code and tried to decipher them. He discovered that Newton was very interested in religion and the occult. Maybe the occult is too strong a word, because it does not appear that Newton yielded to outright superstition. My interpretation: where else would maybe the most important scientist of all time have gotten his inspiration from, if not from the occult?

http://www.youtube.com/watch?v=d-w_2C8WfAw Some missing parts can be seen here: https://www.youtube.com/watch?v=sdmhPfGo3fE

Maybe Newton's occult experiments were meant to prove how intervention from other worlds happens in this world? I don't know, but I think he did not believe that everything was put in motion and then left to run without any external intervention. He believed in some divine intervention. Maybe a little bit like Einstein, who did not like a mechanical universe with randomness added to it from quantum fluctuations?

Einstein liked inventing phrases such as "God does not play dice" and "The Lord is subtle but not malicious." On one occasion Bohr answered, "Einstein, stop telling God what to do." But maybe Einstein was right in that quantum fluctuations are not random?

Maybe Newton was looking for a life force in chemistry in Diana's Tree: http://en.wikipedia.org/wiki/Diana%27s_Tree

Maybe the technology in the time of Newton and Einstein was not capable of showing what they wanted to show? But now we may be on the brink of beginning to prove Newton and Einstein right in a big way? Right in the sense that there is intervention from higher forces or intelligences in the world.

Maybe electronics and computers can be used. Maybe Quantum Fluctuations are not the most random things that we know of.

Maybe one of the most harmful things about atheism is the idea that the thoughts and actions one had and did during earthly life have no consequences for what happens after it. One is highly motivated by the prospect of earning money in earthly life, and one is likewise often keen to avoid what can cause later pain in earthly life, like engaging in criminal activity that society has deemed punishable. Now, if one believes there are no consequences after death, atheist leaders in atheist regimes can be some of the most brutal, if they think they will avoid the retribution some may wish to inflict on them while they are still alive on earth.

#6 Jesper Louis Andersen

It was Kurt Gödel, not Turing, who wrecked Hilbert's and Russell's programmes.

Turing's proof is strongly inspired by Gödel's arithmetization of syntax (Gödel numbering), in which each program is assigned a number. So is Church's proof. But Gödel's first incompleteness theorem is nevertheless something different.

It is a bit subtle, but Gödel's work is about the fact that certain propositions are true yet cannot be proven in a given axiomatic system. Hilbert's Entscheidungsproblem is essentially to develop an algorithm that, given an input consisting of axioms and a sentence, answers either yes or no. The algorithm must answer yes if it can derive the sentence from the axioms, and no in all other cases.

Intuitively, the idea of the algorithm is that it should try all the possibilities the axioms give rise to and thereby exhaust the search space, after which it can answer yes if it finds a derivation and no if it exhausts the search space.

Note that it does not matter whether the sentence is actually true or not. If it is one of the unprovable but true sentences whose existence Gödel proved, the algorithm must answer 'no', since the sentence cannot be proven from the axioms.

The algorithm would also make it possible to 'play' with the axioms: 'What if we add this axiom, can we then prove the sentence?' It would have been lovely to have such an algorithm. Gödel's theorem says that even if we add a new (consistent) axiom, the system does not thereby become complete, since there will be new sentences the system cannot prove.

Church and Turing showed independently of each other (and almost simultaneously, though Church published first, if I remember correctly) that no such algorithm exists. Turing's proof is a reduction from the halting problem: if such an algorithm existed, it could also solve the halting problem, and since that cannot be done on a Turing machine, the algorithm does not exist either. Church's proof shows that it is impossible to decide whether a given lambda expression has a particular normal form, and this is equivalent to Turing's work.
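The direction of that reduction can be shown schematically. Both functions below are hypothetical placeholders (there is no real encoding or proof procedure here); the point is only the implication: if a general provability decider existed, halting would become decidable, which it is not.

```python
# Schematic sketch of the reduction described above. Both functions are
# hypothetical; Church and Turing showed no such decision procedure exists.

def provable(axioms: str, sentence: str) -> bool:
    """A supposed algorithm solving Hilbert's Entscheidungsproblem."""
    raise NotImplementedError("no such algorithm exists")

def encodes_halting(program_source: str, input_data: str) -> str:
    """Computably arithmetize 'this program halts on this input' as a formal
    sentence, in the style of Goedel numbering (placeholder encoding only)."""
    return f"Halts({program_source!r}, {input_data!r})"

def halts(program_source: str, input_data: str) -> bool:
    # If provability were decidable, the halting problem would be too,
    # contradicting the undecidability of halting on a Turing machine.
    return provable("axioms of arithmetic",
                    encodes_halting(program_source, input_data))
```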

#8 Martin Dahl

@Jesper, thanks for the fine explanation. I mistakenly thought Gödel's first incompleteness theorem was sufficient to settle the question of decidability, when in fact it only speaks of unprovable propositions.

So Gödel challenges Hilbert's second problem in 1931, and Gödel's arithmetization allows Turing to refute Hilbert's Entscheidungsproblem in 1936.

#11 Kim Gjøl

That was at least the case for many years with the proof that four colours are enough for any map (without two adjacent regions sharing a colour), an idea first put forward in 1852 by Francis Guthrie. No map could be found for which four colours were not enough, but the proof that this was true was missing. It was finally proven in 1977 by Kenneth Appel and Wolfgang Haken, two mathematicians at the University of Illinois, with the help of a computer algorithm: they first reduced the infinitely many possible maps to roughly 2,000 configurations and then had the computer work through all the possibilities. See more here: http://videnskab.dk/blog/kan-man-stole-pa-en-computer - which also has an interesting discussion of whether a very large number of computer calculations can be regarded as a definitive proof. In any case, the proof was accepted. Similarly with Fermat's theorem. More here: http://da.wikipedia.org/wiki/Fermats_sidste_s%C3%A6tning

#13 Kim Gjøl

Exactly :-) But one could say that some of the big questions in physics are examples. Dark matter exists, probably. General relativity and quantum theory can be reconciled, probably. But otherwise: 10 unproven mathematical conjectures here: http://www.claymath.org/millennium-problems . And 10 corresponding physics problems here: http://infolink2003.elbo.dk/Naturvidenskab/dokumenter/doc/8340.pdf .

#14 Peter Stricker

"But otherwise: 10 unproven mathematical conjectures here"

Yes, but those conjectures are (except for one) merely unproven. They have not been proven unprovable, which is what Aage is asking for: a conjecture that demonstrably will never become a theorem. And as Aage writes, neither the four-colour problem nor Fermat's last theorem is an example of this, since both have undergone that transformation.

If you can prove that one of the 6 remaining of the 7 (!) Millennium Problems cannot be solved, then you have found an example.

#15 Kim Gjøl

OK, so we are looking for provably unprovable truths. According to the Austrian mathematician Gödel there are infinitely many of them. He proved that "in every consistent mathematical system capable of arithmetic on the integers, there are true mathematical statements that cannot be proven". That is, a proof that there exist truths which are unprovable.

#17 Kim Gjøl

An example is the following. There are infinitely many rational numbers. Some of them, but not all, are integers. Intuitively we would say that there are more rational numbers than integers. It can be proven that there are equally many integers and rational numbers*. So take x: "There are more rational numbers than integers." It cannot be proven that there are more rational numbers than integers. *One can assign each rational number its own integer; hence there are equally many.
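The pairing in the footnote can be exhibited explicitly. One well-known way (among several) is the Calkin-Wilf sequence, sketched below, which lists every positive rational exactly once and thereby gives each of them its own natural number.

```python
# Enumerating the positive rationals: the Calkin-Wilf sequence visits every
# positive rational exactly once, giving an explicit pairing with 1, 2, 3, ...
from fractions import Fraction
from math import floor

def calkin_wilf(count: int):
    q = Fraction(1, 1)
    for n in range(1, count + 1):
        yield n, q
        q = 1 / (2 * floor(q) - q + 1)   # next rational in the sequence

for n, q in calkin_wilf(8):
    print(n, q)
# 1 1, 2 1/2, 3 2, 4 1/3, 5 3/2, 6 2/3, 7 3, 8 1/4, ...
```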

#19 Jesper Louis Andersen

"Is there an example of a mathematical statement that is considered true, but where it can be proven that it cannot be proven?"

The problem is that you want a proof about proofs. That is not something standard mathematics works with. Such systems are of course studied, but it is probably far beyond what one normally associates with mathematics. Besides, you quickly end up with infinite chains: what about a system for proofs about proofs about proofs, and so on.

Gödel only showed the existence of true but unprovable sentences. His proof is, so to speak, "not constructive enough" to be used to construct such a sentence. Moreover, such constructed sentences are probably gibberish, and it is not at all certain that any of the interesting sentences are among them.

What is worse, the work of Church and Turing shows that we cannot even use a computer to find them systematically. We cannot just take Goldbach's conjecture and an axiomatic system such as ZFC, throw it at the computer and wait. It also means that any method that can be implemented on a computer is "out", so you would need some proof theory that cannot be computed systematically on a machine.

Goldbach's conjecture is, incidentally, one of those where certain mathematicians seriously wonder whether it is provable at all within ordinary mathematics. But of course this is not known with certainty. Either the conjecture remains a mystery forever, or tomorrow some mathematician happens to crack the riddle.

#22 Marc Barnholdt

Hi Aage

There are statements that are unprovable. An example is the continuum hypothesis. But since we cannot prove it, we can never know whether it is true. So an example cannot be constructed, because if we knew it was true, it would also have been proven.

#25 Marc Barnholdt

Hi Aage

Offhand I would refer you to http://en.wikipedia.org/wiki/Continuum_hypothesis - the situation is that with the rules mathematics follows (the axioms known as ZFC), it cannot be proven. The whole discussion follows from the fact that one can in principle choose any set of axioms as a starting point, and there will then be true sentences that cannot be proven. Within the set of axioms we work with in mathematics today, there is the example of the above-mentioned hypothesis, which cannot be proven. And as mentioned earlier, we do not know whether it is true, only that it cannot be proven. For if we knew it was true, it would also have been proven.

If a different set of axioms is chosen, there will be other hypotheses that cannot be proven, and likewise other hypotheses that are true and cannot be proven.
