http://lesswrong.com/
http://lesswrong.com/ is a site mainly based on the blog posts of Eliezer Yudkowsky (plus comments, discussion, and wiki articles by him and others) on rational reasoning.
"Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information."
Nah's impression:
- They seem to advocate probability-based reasoning. EDIT: Although they advocate "rationality", their desires and ideals are similar to those of other monotheistic cults (prediction-based, waiting for a messiah, fear of doomsday, etc.). Basically, it's a church of techno-borg with a rational skin, I'd say. It's so because they don't do deep enough motivation analysis and they are still looking for the absolute positive.
- They recommend various methods to enhance the "map" so that it corresponds better with the "territory".
- In short, they try to gain and maintain strong certainty, in the form they describe as "beliefs", by evaluating things according to past data and by constantly updating on new data.
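The "updating beliefs on new data" idea they advocate is essentially Bayes' rule. A minimal sketch of such an update (all the probabilities here are made-up numbers, purely for illustration):

```python
# Minimal Bayesian belief update, the core move behind the site's
# "rationality": revise P(H) when evidence arrives.
# All numbers below are hypothetical, chosen only for illustration.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | evidence) given a prior P(H) and the two likelihoods."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# Start at 50% belief, then observe evidence that is 4x more likely
# if the hypothesis is true (0.8) than if it is false (0.2):
posterior = bayes_update(0.5, 0.8, 0.2)
print(posterior)  # 0.8
```

The "constant updating" they describe is just feeding each posterior back in as the next prior.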
- They advocate 12 virtues. More ==> http://yudkowsky.net/rational/virtues
- On a side note, the concepts are scattered among so many blog posts that they are not well organized for rapid information download (by readers' minds).
- "Mainstream"-oriented, strongly scientific, not so open-minded.
- A spin-off from http://www.overcomingbias.com/about
- Lots of interesting perspectives, anyway.
- But I don't like the style. I mean, we have to read a (sometimes long) story to guess what this guy wants to say/convey. Is it "rational" of him to use this style? (And I don't find his stories very fascinating...)
- The main author is from the AI field. http://singinst.org/aboutus/ourmission/ "The Singularity is the technological creation of smarter-than-human intelligence."
http://singinst.org/ is interesting in many ways, too.
- Who is in? Who is supporting it? (Both as individuals and as corporations/organizations)
- Why "the singularity" (instead of multiplicity, diversity, etc.)?
- The relation between http://singinst.org/ and http://www.overcomingbias.com/about
- Overcoming Bias is made by Robin Dale Hanson, who works at http://www.consensuspoint.com/
- He worked in the AI industry as well, and probably this is how they got acquainted.
- Maybe this page is most informative: http://hanson.gmu.edu/home.html
- Betting on predictions of the future (as a sort of tool for brainstorming and consensus building) http://hanson.gmu.edu/ideafutures.html
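Hanson's "idea futures" are prediction markets. He later proposed an automated market maker for them, the logarithmic market scoring rule (LMSR); a rough sketch for a single yes/no question (the liquidity parameter and share quantities below are hypothetical, for illustration only):

```python
import math

# Sketch of Hanson's logarithmic market scoring rule (LMSR), the kind of
# automated market maker used in "idea futures" style prediction markets.
# The liquidity parameter B and the share quantities are made-up numbers.

B = 100.0  # liquidity: larger B means trades move the price more slowly

def cost(q_yes, q_no, b=B):
    """Market maker's cost function over outstanding YES/NO shares."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def price_yes(q_yes, q_no, b=B):
    """Current market-implied probability of the YES outcome."""
    e_yes = math.exp(q_yes / b)
    return e_yes / (e_yes + math.exp(q_no / b))

# A trader buying YES shares pays the difference in cost, which moves
# the implied probability up -- that moving price is the "consensus".
print(price_yes(0, 0))           # 0.5 (no trades yet)
payment = cost(50, 0) - cost(0, 0)   # price of buying 50 YES shares
print(price_yes(50, 0))          # now above 0.5
```

The point of the design is that anyone who disagrees with the current price can profit by trading, so the price aggregates dispersed opinions into one number.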
- Alternative institution =/= Interests in social control/engineering
- A smart kid, somehow very naive (in different ways).
- His interests: http://hanson.gmu.edu/TenureStatement.pdf "Why do we disagree?" <== He wants agreement? Lack of agreement = his main (core) suffering?
- About disagreement: "Are Disagreements Honest?" http://www.gmu.edu/centers/publicchoice/faculty%20pages/Tyler/deceive.pdf
I guess it comes from a rationalist point of view in which evaluations are more or less mechanical/mathematical, once the information and "rationalist beliefs" are known to both parties and verified.
- A story of a disagreement between Robin and Eliezer? http://lesswrong.com/lw/wl/true_sources_of_disagreement/ http://lesswrong.com/lw/wj/is_that_your_true_rejection/
- Transhumanism might be the issue. http://wiki.lesswrong.com/wiki/Transhumanism http://yudkowsky.net/singularity/simplified
- Eliezer ==> life = always good; raising IQ = always good (unless there are bad side effects, etc.) ===> he has to save everyone and improve everyone's IQ and so on ===> nobody dies and nobody is stupid ===> (subconsciously desiring) a utopia of no death and no dumbness. So he is naive, in a way, too.
- More about transhumanism http://www.transhumanism.org/resources/FAQv21.pdf | http://humanityplus.org/ (formerly the World Transhumanist Association)
"Humanity+ is an international nonprofit membership organization which advocates the ethical use of technology to expand human capacities. We support the development of and access to new technologies that enable everyone to enjoy better minds, better bodies and better lives. In other words, we want people to be better than well."
Interesting ... an organization for "Borg". http://en.wikipedia.org/wiki/Transhumanism
Opponents = http://en.wikipedia.org/wiki/Francis_Fukuyama , http://amormundi.blogspot.com/2009/01/condensed-critique-of-transhumanism.html
- "Sequence" of "disagreement" ? Singurality debate?
- Robin's article http://spectrum.ieee.org/robotics/robotics-software/economics-of-the-singularity Predicts AI replace human in white collar job (as robot have done a bit in manual jobs, a nit, up to now)
- Robin's paper http://hanson.gmu.edu/longgrow.pdf
But he has been thinking about this for a long time: his 1998 paper http://hanson.gmu.edu/fastgrow.html
"Singularity" seems to be one of the sacred cows of transhumanists http://www.aleph.se/Trans/Global/Singularity/
- Robin's blog post http://www.overcomingbias.com/2008/06/economics-of-si.html
- Eliezer's post http://lesswrong.com/lw/rk/optimization_and_the_singularity/
- Robin's blog post http://www.overcomingbias.com/2008/06/eliezers-meta-l.html
- Eliezer's post http://lesswrong.com/lw/w2/observing_optimization/
- Eliezer's post http://lesswrong.com/lw/wj/is_that_your_true_rejection/
- It looks like things around the "Singularity" are the issue, although I haven't read it all yet. I think these guys are a bit too smart to accept that we (including themselves) are pretty stupid and insane. They hope a lot, and in that wishful thinking they dream about and idealize the singularity and other fetish concepts, which is fine and most of us do it anyway, but I don't think it's "rational", if they care.
- Robin is slightly less naive than I guessed (at least about the feasibility of AI). http://www.overcomingbias.com/2008/05/roger-shank-ai.html
- But he IS very naive about social manipulation... http://www.overcomingbias.com/2008/09/disagreement-is.html
He thinks it will take more time to build AI than some people in H+ think. http://metamagician3000.blogspot.com/2008/04/transhumanism-still-at-crossroads.html
He quoted "The overwhelming majority [of Transhumanists] might as well belong to a religious cargo cult based on the notion that self-modifying AI will have magical powers."
And said: "My co-blogger Eliezer and I agree on many things, but here we seem to disagree."
"Robin Hanson
September 5, 2008 at 6:47 pm | Reply
Alas, most of these comments show little awareness of the many other posts on disagreement I’ve made before on this blog. This suggests I get little cumulative gain from multiple blog posts – I must fit all my arguments on this subject into a single blog post, quit this subject, or quit blogging."
It shows he has (or had) the wrong presumption that humans can think rationally and can learn adequately well. And he held (or still holds) such a presumption probably because he wants to change the minds of others (towards whatever ideal he desires).
Maybe he is a wannabe messiah of econo-rational branding, just like many ideal-seeking guys (especially those of monotheistic culture).
Basically, I feel that he wants everyone (at least those whom he can hope are rational) to agree with him. Thus, disagreement hurts him.