Comment author: I_D_Sparse 21 March 2017 09:57:37PM 0 points

First comes some gene A which is simple, but at least a little useful on its own, so that A increases to universality in the gene pool. Now along comes gene B, which is only useful in the presence of A, but A is reliably present in the gene pool, so there's a reliable selection pressure in favor of B. Now a modified version A* arises, which depends on B, but doesn't break B's dependency on A/A*. Then along comes C, which depends on A* and B, and B*, which depends on A* and C.

Can anybody point me to some specific examples of this type of evolution? I'm a complete layman when it comes to biology, and this fascinates me. I'm having a bit of a hard time imagining such a process, though.
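Not a biology example, but here is one way to make the process easier to imagine: a minimal toy simulation (entirely my own illustration; the gene names, fitness bonus, population size, and generation counts are made up). Each gene confers a fitness advantage only when its prerequisites are already present, so each gene can only sweep to fixation after the genes it depends on have.

```python
import random

random.seed(0)

# Hypothetical dependency structure from the quoted passage:
# B needs A; A* needs B; C needs A* and B.
PREREQS = {"A": set(), "B": {"A"}, "A*": {"B"}, "C": {"A*", "B"}}
POP, BONUS = 200, 0.1

def fitness(genome):
    # A gene helps only if everything it depends on is also present.
    return 1.0 + BONUS * sum(1 for g in genome if PREREQS[g] <= genome)

population = [set() for _ in range(POP)]
for new_gene in ["A", "B", "A*", "C"]:
    # The variant arises in a few genomes...
    for i in random.sample(range(POP), 20):
        population[i].add(new_gene)
    # ...then fitness-weighted reproduction runs for some generations.
    for _ in range(300):
        weights = [fitness(g) for g in population]
        population = [set(random.choices(population, weights)[0])
                      for _ in range(POP)]
    freq = sum(new_gene in g for g in population) / POP
    print(f"{new_gene} frequency after selection: {freq:.2f}")  # approaches 1.0
```

Each gene is useless (here, merely neutral) until its prerequisites have fixed, after which selection reliably carries it to universality in turn.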

Comment author: mindreadings 21 March 2017 09:11:35PM 0 points

I had no idea. I was just pointed to it recently from another list.

Comment author: Lumifer 21 March 2017 07:22:49PM 0 points

You know you're replying to an 8-year-old thread, right?

Comment author: mindreadings 21 March 2017 07:03:31PM 0 points

Good. The experiment is, however, very good evidence for the hypothesis that R.S. Marken is a crank, and explains the quote from his farewell speech that didn't make sense to me before:

I can be a pretty cranky fellow but I think there might be better evidence of that than the model fitting effort you refer to. The "experiment" that you find to be poor evidence for PCT comes from a paper published in the journal Ergonomics that describes a control theory model that can be used as a framework for understanding the causes of error in skilled performance, such as writing prescriptions. The fit of the model to the error data in Table 1 is meant to show that such a control model can produce results that mimic some existing data on error rates (and without using more free parameters than data points; there are 4 free parameters and 4 data points; the fit of the model is, indeed, very good but not perfect).

But the point of the model fitting exercise was simply to show that the control model provides a plausible explanation of why errors in skilled performance might occur at particular (very low) rates. The model fitting exercise was not done to impress people with how well the control model fits the data relative to other models since, to my knowledge, there are no comparable models of error against which to compare the fit. As I said in the introduction to the paper, existing models of error (which are really just verbal descriptions of why error occurs) "tell us the factors that might lead to error, but they do not tell us why these factors produce an error only rarely."
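For readers unfamiliar with the free-parameter point at issue in this exchange, a minimal sketch (with made-up numbers) of why a perfect fit with as many free parameters as data points is weak evidence on its own: a cubic has four coefficients, and so reproduces any four points exactly.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.12, 3.40, 0.56, 7.80])        # arbitrary made-up "data"
coeffs = np.polyfit(x, y, deg=3)              # cubic: 4 free parameters
print(np.allclose(np.polyval(coeffs, x), y))  # True: a perfect fit, always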

So if it's the degree of fit to the data that you are looking for as evidence of the merits of PCT, then this paper is not necessarily a good reference for that. Actually, a good example of the kind of fit to data you can get with PCT can be gleaned from doing one of the on-line control demos at my Mind Readings site, particularly the Tracking Task. When you become skilled at doing this task, you will find that the correlation between the PCT model (called "Model" in the graphic display at the end of each trial) and your behavior will be close to one. And this is achieved using a model with no free parameters at all; the parameter values are those that have worked for many different individuals, and they are now simply constants in the model.
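For a sense of what such a control model looks like, here is a minimal sketch of a tracking control loop in the PCT style: a leaky integrator that acts to keep a perceived cursor on a target despite a disturbance. The gain, slowing, and disturbance values below are my own illustrative assumptions, not the constants from the actual Mind Readings demos.

```python
import numpy as np

rng = np.random.default_rng(1)
steps = 2000
dt = 0.01
gain, slowing = 100.0, 0.1          # assumed constants for illustration

# A slowly drifting disturbance pushes the cursor off the target.
disturbance = np.cumsum(rng.normal(scale=0.5, size=steps)) * dt

target = 0.0                        # reference signal
output = 0.0
outputs = np.empty(steps)
for t in range(steps):
    cursor = output + disturbance[t]          # perception: cursor position
    error = target - cursor                   # reference minus perception
    output += slowing * (gain * error - output) * dt   # leaky integrator
    outputs[t] = output

# Because the loop controls its input, the output ends up nearly
# mirroring the disturbance; the correlation below comes out close to -1.
print(np.corrcoef(outputs, disturbance)[0, 1])
```

The point the sketch illustrates: the system's output is not a planned trajectory but whatever it takes, moment to moment, to keep the perceived cursor near the reference.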

Oh, and if you are looking for examples of things PCT can do that other models can't do, try the Mind Reading demo, where the computer uses a methodology based on PCT, called the Test for the Controlled Variable, to tell which of three avatars -- all three of which are being moved by your mouse movements -- is the one being moved intentionally.

The fact that Marken was repeatedly told this, interpreted it to mean that others were jealous of his precision, and continued to produce experimental "results" of the same sort along with bold claims of their predictive power, makes him a crank.

I don't recall ever being told (by reviewers or other critics) that the goodness of fit of my (and my mentor Bill Powers') PCT models to data was a result of having more free parameters than data points. And had I ever been told that, I would certainly not have thought it was because others were jealous of the precision of our results. And the main reason I have continued to produce experimental results -- available in my books Mind Readings, More Mind Readings and Doing Research on Purpose -- is not to make bold claims about the predictive power of the PCT model but to emphasize the point that PCT is a model of control, the process of consistently producing pre-selected results in a disturbance-prone world. The precision of PCT comes only from the fact that it recognizes that behavior is not a caused result of input or a cognitively planned output but a process of control of input. So if I'm a crank, it's not because I imagine that my model of behavior fits the data better than other models; it's because I think my concept of what behavior is is better than other concepts of what behavior is.

I believe Richard Kennaway, who is on this blog, can attest to the fact that, while I may not be the sharpest crayon in the box, I’m not really a crank; at least, no more of a crank than the person who is responsible for all this PCT stuff, the late (great) William T. Powers.

I hope all the formatting comes out ok on this; I can't seem to find a way to preview it.

Best regards

Rick Marken

Comment author: Reeee 21 March 2017 05:01:19AM 0 points

Just wanted to add that it was a really thought-provoking and fun read. By "failure", I did not mean failure on the part of the author -- it's his story -- but on the part of humanity. Sorry to double post; you probably won't see more from me. I just found this a compelling read.

Comment author: Reeee 21 March 2017 02:56:18AM 0 points

Now I can't help but look at the normal ending as the preferable one. Along with the aesthetic design of the ships, I would expect quite possibly a merging of the two races in the process -- whether this has happened by this point in the story is not something I can guess at, but it would be inevitable whether it has or not (or perhaps I misread something here, and simple modification, not outright merging, is actually all that took place)...

... I'd have to wonder what aspects of babyeater nature and society that could be considered positive have been merged into the superhappies -- such as a profound sense of tribal duty (arguably already existing in the superhappies, but more starkly expressed in the babyeaters), or a very strong willingness to sacrifice one's own pleasure for the perceived good of the tribe and the whole (no more hiding from negative empathic emissions behind the superhappy confessors -- well, not quite as much). I'm sure there's more. At first glance it looked to me like the superhappies basically ate their brains for their knowledge, but after a week of consideration: they too would no longer be superhappies in the end.

What do they get from humans? Deception? Big beefy arms on the ship? I'm unable to say because I have difficulty separating my current perception of humanity from the evolved society in this one, but some constants stay true. Is it not a sort of evolution? A macrocosm of wanting to unite all people of differing perspectives and backgrounds for a shared goal, for the greater good of the whole? If you sat a human down next to our early ancestors, given the same backgrounds, would they be the same, or somehow different?

I know I'm far from the smartest person in the room, but the original ending seems to be a win and the true ending a failure. Blowing up the star and dooming all those people who had little to no say in the matter strikes me as more harmful and staggeringly less productive. The people in the first ending who commit suicide chose that for themselves, after choosing for their children; that was their decision entirely, based on a principle of what it means to be human, and not what it means to be a sentient being. (Which is why the first ending, in my opinion, is less wrong than the second, where a handful of people make that choice for everyone who could have chosen to opt out themselves, based on their own opinion of what it means to be human.) Just wanted to say my wrong-thinking piece, because it's been nagging me for a week.

Comment author: hairyfigment 20 March 2017 06:32:33PM 0 points

Yes, but as it happens that kind of difference is unnecessary in the abstract. Besides the point I mentioned earlier, you could have a logical set of assumptions for "self-hating arithmetic" that proves arithmetic contradicts itself.

Completely unnecessary details here.
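For the curious, the construction behind "self-hating arithmetic" is standard; a sketch, assuming Gödel's second incompleteness theorem:

```latex
\text{Assume PA is consistent. By G\"odel's second incompleteness theorem, }
\mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA}). \\
\text{Hence } T = \mathrm{PA} + \neg\mathrm{Con}(\mathrm{PA})
\text{ is consistent, yet } T \vdash \neg\mathrm{Con}(\mathrm{PA}): \\
\text{a consistent theory that proves arithmetic contradicts itself.}
```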

Comment author: gjm 20 March 2017 03:36:43PM 0 points

D'oh!

Comment author: Lumifer 20 March 2017 03:22:20PM *  2 points

I think the current-day ZMD is talking to his past self (8 years and 10 months from the replied-to post).

Comment author: gjm 20 March 2017 03:12:04PM 0 points

But you probably won't understand what I'm talking about for another eight years, ten months.

What do you expect to happen in January 2026, and why? (And why then?)

Also, are you the same person[1] as the "Z. M. Davis" you are replying to?

[1] Adopting the usual rather broad notion of "same person".

Comment author: I_D_Sparse 18 March 2017 08:56:42PM *  0 points

Unfortunately, yes.

Comment author: SnowSage4444 18 March 2017 03:01:28PM 0 points

No, really, what?

What "Different rules" could someone use to decide what to believe, besides "Because logic and science say so"? "Because my God said so"? "Because these tea leaves said so"?

Comment author: jkaufman 18 March 2017 02:38:50PM 0 points

Running "1000 experiments" if you don't have to publish negative results, can mean just slicing data until you find something. Someone with a large data set can just do this 100% of the time.

A replication is more informative, because it's not subject to nearly as much "find something new and publish it" bias.
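To make the base-rate point concrete, a minimal sketch (my own illustration, using standard numpy/scipy calls): run many tests on pure noise and count how many come out "significant" at p < 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 1000
false_positives = 0

for _ in range(n_experiments):
    # Two groups drawn from the *same* distribution: no real effect exists.
    a = rng.normal(size=30)
    b = rng.normal(size=30)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

# Roughly 5% of tests come out "significant" despite zero true effects,
# so ~50 publishable-looking findings from nothing.
print(f"{false_positives} of {n_experiments} tests were 'significant'")
```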

Comment author: I_D_Sparse 18 March 2017 12:50:32AM 0 points

If someone uses different rules than you to decide what to believe, then things that you can prove using your rules won't necessarily be provable using their rules.

Comment author: I_D_Sparse 17 March 2017 07:31:58PM 1 point

Yes, but the idea is that a proof within one axiomatic system does not constitute a proof within another.
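A classic concrete instance (a standard fact, offered here as illustration): Robinson arithmetic Q, which is Peano arithmetic without the induction schema, cannot prove the commutativity of addition, while PA proves it by induction. So a PA-proof of commutativity is not a proof at all to someone who only accepts Q's rules:

```latex
\mathrm{PA} \vdash \forall x\,\forall y\,(x + y = y + x)
\qquad\text{but}\qquad
\mathrm{Q} \nvdash \forall x\,\forall y\,(x + y = y + x)
```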

In response to comment by gjm on Open Thread
Comment author: SnowSage4444 17 March 2017 06:57:39PM 0 points

Thank you.

Comment author: Zack_M_Davis 16 March 2017 07:50:18PM *  1 point

but the Science versus Bayescraft rhetoric is a disaster.

What's wrong with you? It's true that people who don't already have a reason to pay attention to Eliezer could point to this and say, "Ha! An anti-science crank! We should scorn him and laugh!", and it's true that being on the record saying things that look bad can be instrumentally detrimental towards achieving one's other goals.

But all human progress depends on someone having the guts to just do things that make sense or say things that are true in clear language, even if it looks bad when your head is stuffed with the memetic detritus of the equilibrium of the crap that everyone else is already doing and saying. Eliezer doesn't need your marketing advice.

But you probably won't understand what I'm talking about for another eight years, ten months.

Comment author: snewmark 16 March 2017 03:04:30PM 0 points

Oh, I wasn't aware that they had to be Bayesian for that rule to apply, thanks for the help.

In response to comment by SnowSage4444 on Open Thread
Comment author: gjm 15 March 2017 03:37:56PM 1 point

If you mean you want to post the actual fanfic here: this is probably not the best place for fanfics; try fanfiction.net, perhaps?

If you mean you want to post the fanfic somewhere else: what do the instructions somewhere-else say?

If you mean the fanfic is already posted but you want to post a link to it here: easiest is probably to put a comment in the current media thread (called "March 2017 Media Thread"). To make a link, write something like this: [what you want displayed](url for link).

Comment author: Lumifer 15 March 2017 03:11:50PM 1 point

It's a hint at Aumann's theorem.
