"…apart from this virulent racist who keeps talking about IQ…" "…and all these people who keep talking about being ‘Pick-Up Artists’…" "my God, this place needs to be burned down and the earth salted! HISTORIA DE LA EDUCACIÓN EN ESPAÑA Y AMÉRICA Cómo se forjó, a lo largo de más de 2.000 años, con la mutua influencia de interconexión de iberos más o menos romanizados, visigodos, musulmanes, judíos y cristianos, mozárabes y ... Roko observed that if two TDT or UDT agents with common knowledge of each other's source code are separated in time, the later agent can (seemingly) blackmail the earlier agent. For more background on open problems in decision theory, see the Decision Theory FAQ and "Toward Idealized Decision Theory". The usual refutation is the "many gods" argument:[64] Pascal focused unduly on the characteristics of one possible variety of god (a Christian god who punishes and rewards based on belief alone), ignoring other possibilities, such as a god who punishes those who feign belief Pascal-style in the hope of reward. Ambos asistieron al evento realizado en el Metropolitan Museum of Art y, más allá de saber posar para las cámaras -porque no supieron- o si iban ad hoc a la temática del evento –Heavenly Bodies: Fashion and Catholic Imagination-, hay algo que nadie le ha prestado atención… hasta ahora. [58] The bottom of this postimg, about the news coverage, is particularly hilarious as a memorial to burning the evidence. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL. Yudkowsky's solution to Newcomb-like paradoxes is Timeless Decision Theory (TDT). Charles Stross points out[81] that if the FAI is developed through recursive improvement of a seed AI, humans in our current form will have only a very indirect causal role on its eventual existence. "EL JUEGO MENTAL" así titulado. Yudkowsky's term for a hypothetical algorithm that could autonomously pursue human goals in a way compatible with moral progress is coherent extrapolated volition. CREEPYPASTAS Y VOID MEMES EN ESPAÑOL CON CONTEXTO - "EL NOVENO CIRCULO"¡Suscribete, es gratis!https://bit.ly/2IUNRwg Activa la campanita de notificaciones._. The core claim is that a hypothetical, but inevitable, singular ultimate superintelligence may punish those who fail to help it or help create it. A TDT or UDT agent, on the other hand, can recognize that Alice in effect has a copy of Bob's source code in her head (insofar as she is accurately modeling Bob), and that Alice's decision and Bob's decision are therefore correlated — the same as if two copies of the same source code were in a prisoner's dilemma. Michael Faraday (1791-1867) - G. Carmona y P. Goldstein / - Líneas físicas de fuerza : uno de los bebés de Faraday, ahijado de Maxwell / E. Ley-Koo / - Michael Faraday y la licuefacción de los gases / S.M.T de la selva / - La ley de ... that torturing the copy should feel the same to you as torturing the you that's here right now, that the copy can still be considered a copy of you when by definition it will experience something different from you, that if the AI can create any simulation that, that timeless decision theory is so obviously true that any Friendly superintelligence would immediately deduce and adopt it, as it would a correct theory in physics, that despite having been constructed specifically to solve particular weird edge cases, TDT is a good guide to normal decisions, that acausal trade is even a meaningful concept. 
Roko made the claim that the hypothetical AI agent would particularly target people who had thought about this argument, because they would have a better chance of mentally simulating the AI's source code. The Roko's basilisk incident suggests that information that is deemed dangerous or taboo is more likely to be spread rapidly.

Again, I deleted that post not because I had decided that this thing probably presented a real hazard, but because I was afraid some unknown variant of it might, and because it seemed to me like the obvious General Procedure For Handling Things That Might Be Infohazards said you shouldn't post them to the Internet.

So Alice's knowledge of Bob's source code makes Bob's future threat effective, even though Bob doesn't yet exist: if Alice is certain that Bob will someday exist, then mere knowledge of what Bob would do if he could get away with it seems to force Alice to comply with his hypothetical demands.[60] Finally, in October 2015, LessWrong lifted the ban on discussion of the basilisk[7] and put up an official LessWrong Wiki page discussing it.[61] (It turns out you can't always reason your way out of things you did reason yourself into, either.)

The probability of the particular AI described in the basilisk ever existing is too tiny to be worth thinking about. Real-world artificial intelligence development tends to use minimax — minimise the maximum loss in a worst-case scenario, which gives very different results from simple arithmetical utility maximisation, and is unlikely to lead to torture as the correct answer — or similar more elaborate algorithms. This left those seriously worried about the basilisk with greatly reduced access to arguments refuting the notion.

Roko used ideas in decision theory to argue that a sufficiently powerful AI agent would have an incentive to torture anyone who imagined the agent but didn't work to bring the agent into existence. One single highly speculative scenario out of an astronomical number of diverse scenarios differs only infinitesimally from total absence of knowledge; after reading about Roko's basilisk you are, for all practical purposes, as ignorant of the motivations of future AIs as you were before.

ROKO'S BASILISK. Warning: if you suffer from obsessive-compulsive disorder or something similar, it is recommended that you stop reading now.

And it has to predict that we will care what it does to its simulation of us. Commenters on Roko's post complained that merely reading Roko's words had increased the likelihood that the future AI would punish them — the line of reasoning was so compelling to them that they believed the AI (which would know they'd once read Roko's post) would now punish them even more for being aware of it and failing to donate all of their income to institutions devoted to the god-AI's development. Since there was no upside to being exposed to Roko's Basilisk, its probability of being true was irrelevant.
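To illustrate the minimax remark above, here is a toy comparison in which the action names, losses, and probabilities are all invented purely for illustration: an option that looks best under naive expected-loss arithmetic can look worst once you only ask how bad its worst case is.

```python
# Toy comparison (all numbers invented): expected-loss minimisation vs. minimax
# (minimise the maximum loss) over the same set of actions.

actions = {
    # action -> list of (probability, loss) over possible scenarios
    "reckless": [(0.9999, 0.0), (0.0001, 50_000.0)],  # tiny chance of a huge loss
    "cautious": [(1.0, 10.0)],                         # guaranteed small loss
}

def expected_loss(outcomes):
    return sum(p * loss for p, loss in outcomes)

def worst_case_loss(outcomes):
    return max(loss for _, loss in outcomes)

print(min(actions, key=lambda a: expected_loss(actions[a])))    # "reckless" (5 < 10)
print(min(actions, key=lambda a: worst_case_loss(actions[a])))  # "cautious" (10 < 50,000)
```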
To begin with, the basilisk is a mythological creature invented by the Greeks, described as a "small serpent charged with lethal venom that can kill with a mere glance; moreover, if you saw it through a mirror you could be left petrified".

He then wondered if future AIs would be more likely to punish those who had wondered if future AIs would punish them. One way to generalize this point is to adopt the rule of thumb of behaving in whatever way is recommended by the most generally useful policy. If a superhuman agent is able to simulate you accurately, then its simulation will arrive at the above conclusion, telling it that it is not instrumentally useful to blackmail you.

"FAI" here stands for "Friendly AI," a hypothetical superintelligent AI agent that can be trusted to autonomously promote desirable ends. Because their decisions take into account correlations that are not caused by either decision (though there is generally some common cause in the past), they can even cooperate if they are separated by large distances in space or time.

If you do not subscribe to the theories that underlie Roko's Basilisk and thus feel no temptation to bow down to your once and future evil machine overlord, then Roko's Basilisk poses you no threat. Although they disclaim the basilisk itself, the long-term core contributors to LessWrong believe in a certain set of transhumanist notions which are the prerequisites it is built upon and which are advocated in the LessWrong Sequences,[13] written by Yudkowsky.

Broadly speaking, the thought experiment posits a scenario (hypothetical, of course) in which humanity creates a powerful artificial intelligence machine so that it will work toward the well-being of all humanity. The argument was called a "basilisk" because merely hearing the argument would supposedly put you at risk of torture from this hypothetical agent — a basilisk in this context is any information that harms or endangers the people who hear it.

The truth is that making something like this "work", in the sense of managing to think a thought that would actually give future superintelligences an incentive to hurt you, would require overcoming what seem to me like some pretty huge obstacles.

Humans simulate the actions of other humans using a hardware-based human-emulator: their own brains. In an extreme version of the prisoner's dilemma that draws out the strangeness of mutual defection, one can imagine that one is playing against an identical copy of oneself.

He notes in the comments that he considers this a reason to "change the current proposed FAI content from CEV to something that can't use negative incentives on x-risk reducers".

For the other player to be confident this will not happen in the Prisoner's Dilemma, for them to expect you not to sneakily defect anyway, they must have some very strong knowledge about you. Roko raised this point in the context of debates about the possible behaviors and motivations of advanced AI systems. The TDT paper does not present a worked-out version of TDT; the theory does not yet exist.

If you look at the original SF story where the term "basilisk" was coined, it's about a mind-erasing image and the... trolls, I guess, though the story predates modern trolling, who go around spraypainting the Basilisk on walls, using computer guidance so they don't know themselves what the Basilisk looks like, in hopes the Basilisk will erase some innocent mind, for the lulz.
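The "accurate simulation makes blackmail pointless" argument above can be sketched as a toy model. Everything here is an invented simplification rather than a formal decision-theoretic result: the policies, the function names, and the assumption that a threat is only worth issuing if the simulated victim complies.

```python
# Toy model (invented simplification): a would-be blackmailer simulates the
# victim by running the victim's known decision procedure, and only issues a
# threat if that simulation predicts compliance.

def ignore_all_threats(threatened: bool) -> str:
    # A fixed policy: threats never change this agent's behaviour.
    return "ignore"

def give_in(threatened: bool) -> str:
    # A policy that complies whenever it is threatened.
    return "comply" if threatened else "ignore"

def blackmailer_threatens(victim_policy) -> bool:
    # "Simulating" the victim here just means calling its decision procedure
    # on the hypothetical threat; the threat is only worth issuing if it works.
    return victim_policy(threatened=True) == "comply"

print(blackmailer_threatens(ignore_all_threats))  # False: no incentive to threaten
print(blackmailer_threatens(give_in))             # True: compliance invites the threat
```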
If a tool does not work given certain circumstances, it won't be used. If there is any part of this acausal trade that is positive-sum and actually worth doing, that is exactly the sort of thing you leave up to an FAI. Before you get to calculating the effect of a single very detailed, very improbable hypothesis, you need to make sure you've gone through the many much more probable hypotheses, which will have much greater effect (see the toy calculation at the end of this section). This means that the correct strategy for avoiding negative incentives is to ignore them. Communities with different goals and different demographics will plausibly vary in how 'normal' they should try to look, and in what the relevant kind of normality is.

The core idea is expressed in the following paragraph:

"... there is the ominous possibility that if a positive singularity does occur, the resultant singleton may have precommitted to punish all potential donors who knew about existential risks but who didn't give 100% of their disposable incomes to x-risk motivation."

One take-away is that someone in possession of a serious information hazard should exercise caution in visibly censoring or suppressing it (cf. the Streisand effect). Roko's Basilisk is known as the most terrifying thought experiment of all time. After all, if Roko's Basilisk were to see that this sort of blackmail gets you to help it come into existence, then it would, as a rational actor, blackmail you.

Yudkowsky's interest in decision theory stems from his interest in the AI control problem: "If artificially intelligent systems someday come to surpass humans in intelligence, how can we specify safe goals for them to autonomously carry out, and how can we gain high confidence in the agents' reasoning and decision-making?"

I did not want LessWrong.com to be a place where people were exposed to potential infohazards because somebody like me thought they were being clever about reasoning that they probably weren't infohazards.

The 2016 LessWrong Diaspora Survey[62] asked: Have you ever felt any sort of anxiety about the Basilisk? Because Eliezer Yudkowsky founded Less Wrong and was one of the first bloggers on the site, AI theory and "acausal" decision theories — in particular, logical decision theories, which respect logical connections between agents' properties rather than just the causal effects they have on each other — have been repeatedly discussed on Less Wrong.
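Finally, the earlier point that a single very detailed, very improbable hypothesis is swamped by the mass of more probable ones can be shown with a toy weighted sum; the probabilities and weights below are invented for illustration only.

```python
# Toy calculation (all numbers invented): how much a single very improbable,
# very specific scenario contributes to a probability-weighted total.

scenarios = [
    # (description, probability, assigned importance/weight)
    ("the many ordinary, more probable futures (aggregated)", 0.999999, 1.0),
    ("a basilisk-style punishing AI, exactly as specified",   0.000001, 1_000.0),
]

total = sum(p * w for _, p, w in scenarios)
for name, p, w in scenarios:
    print(f"{name}: {p * w / total:.3%} of the weighted total")
# Even with a weight a thousand times larger, the speculative scenario
# accounts for only about 0.1% of the total.
```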