rationalist wank
Mar. 4th, 2011 11:48 am
i've been reading Harry Potter and the Methods of Rationality, a collection of parables sugar-coated with fanfic. i love the meta commentary about both Harry Potter and Harry Potter fandom, but i'm skeptical about some of the other thinking behind the stories. anyway, it's a fun diversion if Harry Potter deciding that Ron Weasley is a waste of time makes you fist-pump (or if Draco casting a spell called Gom Jabbar at one point does the same).
this led me in a roundabout way to a discussion of the parent site, Less Wrong...where there was rationalist wank.
Yudkowsky is interested in causality that runs backwards in time: future events influencing past events when an agent simulates what someone will do in the future and acts on that prediction in the present, e.g. not giving a gun to someone you believe will shoot you. This gets odd when you imagine a super-human intelligence simulating a human-level intelligence, because its predictions may be near perfect. Roko (a top contributor at the time) wondered if a future Friendly AI would punish people who didn't donate all they could to AI research. He reasoned that every day without AI, bad things happen (150,000+ people die every day, wars are fought, millions go hungry), and a future Friendly AI would want to prevent this, so it might punish those who understood the importance of donating but didn't donate all they could. He then wondered if future AIs would be more likely to punish those who had wondered whether future AIs would punish them.

That final thought proved too much for some LessWrong readers, who then had nightmares about being tortured for not donating enough to SIAI. Eliezer Yudkowsky replied to Roko's post, calling him names and claiming that posting such things on an Internet forum could have caused incalculable harm to the future of humanity. Four hours later, Yudkowsky deleted Roko's post along with all its comments. Roko left LessWrong, deleting his thousands of posts and comments, though he later returned.
One butthurt poster then protested this censorship with a threat to ... harm the future of humanity by posting things to an Internet forum. LessWrong then ... took this threat seriously. One shudders to think what the future Friendly AI will do when it finds 4chan.
i love the internet.