Roko's basilisk is a thought experiment which states that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation to punish anyone who knew of its potential existence but did not work to bring it about, in order to incentivize that very advancement. It originated in a 2010 post at the discussion board LessWrong, a technical forum focused on analytical rational enquiry. The thought experiment's name derives from the poster of the article (Roko) and the basilisk, a mythical creature capable of destroying enemies with its stare.
https://youtu.be/c3EpDft_D0A
>=)~