Author Topic: RESPONSE TIME (RT) research in psychology is bullsh*t  (Read 3764 times)


Offline iLLucionist

  • * Elevated Elder
  • Thread Starter
  • Posts: 2735
  • Location: Netherlands
  • Topre is Love.
RESPONSE TIME (RT) research in psychology is bullsh*t
« on: Sat, 29 April 2017, 07:16:26 »
Fellow geekhackians, you would understand. Please gimme mental support. So I work in a psych department as a PhD candidate, doin' my research and all.

So a lot of psychologists do subliminal / subconscious research, in which response time measurement is a very commonly used paradigm. Lemme give an example. Suppose I manipulate thirst by showing pictures of water. If you show "approach" behavior, you should be quicker to respond to drink-related words than to non-drink-related (neutral) words. These words are called "stimuli". So you present participants with words, one by one, and people have to hit a key as quickly as possible. The assumption then is that if you have approach-related tendencies toward drink-related words, you press the key faster for those than for neutral words.

EXHIBIT 1: Participant shows drink-related approach tendencies (numbers just for illustration):

drink: 0.1 ms; water: 0.2 ms; car: 2 ms; cheese: 3 ms; beer: 0.1 ms

within-person average: ~0.1 ms for drink words; ~2.5 ms for neutral words
conclusion: this participant shows approach-related tendencies for drinking --> the manipulation induced thirst
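
For reference, here is roughly what such a trial loop looks like in code: a minimal sketch in Python, where the word lists, the console input, and the timing calls are all placeholders for illustration, not how DirectRT / E-Prime actually work internally.

Code:
# Minimal sketch of an RT trial loop (illustrative only; not how
# DirectRT / E-Prime are implemented internally).
import random
import time

DRINK_WORDS = ["drink", "water", "beer"]        # hypothetical stimulus lists
NEUTRAL_WORDS = ["car", "cheese", "pencil"]

def run_trial(word):
    """Show a word, wait for a keypress, return the measured RT in ms."""
    print(word)                        # stand-in for drawing the stimulus on screen
    t0 = time.perf_counter()           # high-resolution software timestamp
    input()                            # stand-in for the response key; blocks until Enter
    t1 = time.perf_counter()
    return (t1 - t0) * 1000.0          # elapsed time in milliseconds

def run_block():
    words = DRINK_WORDS + NEUTRAL_WORDS
    random.shuffle(words)
    results = {word: run_trial(word) for word in words}
    drink_rt = sum(results[w] for w in DRINK_WORDS) / len(DRINK_WORDS)
    neutral_rt = sum(results[w] for w in NEUTRAL_WORDS) / len(NEUTRAL_WORDS)
    print(f"mean RT drink words: {drink_rt:.1f} ms, neutral words: {neutral_rt:.1f} ms")

if __name__ == "__main__":
    run_block()

The timestamping itself can be microsecond-precise; the issue below is everything that sits between the finger and that timestamp.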

Soo.. now the issue:

They use DirectRT / E-Prime software. They CLAIM sub-millisecond response time accuracy. BUUUT:

1) programmed in Visual Basic, with no low-level drivers that leverage low-latency timers in the CPU, wtf #1

2) experiment PCs don't have real-time timer hardware, wtf #2

3) they use stock keyboards, non-crossover, non-mechanical, just your regular Dell-bundled rubber domes, wtf #3

In short: you cannot CLAIM that you record sub-millisecond response times, right? Because the hardware between the participant and the PC (PEBKAC) is not sensitive to sub-millisecond differences.
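
To make that concrete, a back-of-the-envelope latency budget for a stock keyboard-to-application chain. All of the figures are rough, typical values I'm assuming for a generic rubber-dome USB board, not measurements of any particular lab setup:

Code:
# Back-of-the-envelope latency budget for a stock keyboard + USB chain.
# All figures are rough assumptions for a generic rubber-dome USB keyboard,
# not measurements of any particular lab machine.
latency_ms = {
    "matrix scan interval": (0.0, 10.0),   # keypress waits for the next scan pass
    "debounce algorithm":   (5.0, 20.0),   # firmware waits for the contact to settle
    "USB polling (125 Hz)": (0.0, 8.0),    # report waits for the next host poll
    "OS input stack":       (0.5, 2.0),    # driver + event queue before the app sees it
}

best = sum(lo for lo, hi in latency_ms.values())
worst = sum(hi for lo, hi in latency_ms.values())

print(f"added delay per keypress: {best:.1f} to {worst:.1f} ms")
print(f"trial-to-trial jitter: up to ~{worst - best:.1f} ms")
# With tens of milliseconds of jitter per keypress, sub-millisecond digits in
# any single reading describe the hardware chain, not the participant.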

Yet psychologists claim that they can make those inferences.

My colleagues don't understand. Please save me..
MJT2 Browns o-rings - HHKB White - ES-87 Smoke White Clears - 87UB 55g

Offline tp4tissue

  • * Destiny Supporter
  • Posts: 13565
  • Location: Official Geekhack Public Defender..
  • OmniExpert of: Rice, Top-Ramen, Ergodox, n Females
Re: RESPONSE TIME (RT) research in psychology is bullsh*t
« Reply #1 on: Sat, 29 April 2017, 07:23:00 »
I don't know the software, but if it leverages the newer HPET, sub-ms should be possible through dithering.
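
For what it's worth, the software clock side is the easy part. A quick sketch (Python purely for illustration) to check the step size of the OS high-resolution timer; it says nothing about the keyboard or USB path, only about the timestamping itself:

Code:
# Quick check of the OS high-resolution timer's step size.
# This only characterises the software clock, not the keyboard/USB chain.
import time

def smallest_tick(samples=100_000):
    """Smallest nonzero gap between consecutive perf_counter_ns() reads."""
    best = None
    prev = time.perf_counter_ns()
    for _ in range(samples):
        now = time.perf_counter_ns()
        delta = now - prev
        if delta > 0 and (best is None or delta < best):
            best = delta
        prev = now
    return best

if __name__ == "__main__":
    tick_ns = smallest_tick()
    if tick_ns is None:
        print("clock never advanced; increase the sample count")
    else:
        print(f"smallest observed timer step: {tick_ns} ns ({tick_ns / 1e6:.4f} ms)")
# On modern hardware this typically comes out well under a microsecond, so the
# timestamping is not the bottleneck; the input path in front of it is.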

Offline iLLucionist

  • * Elevated Elder
  • Thread Starter
  • Posts: 2735
  • Location: Netherlands
  • Topre is Love.
Re: RESPONSE TIME (RT) research in psychology is bullsh*t
« Reply #2 on: Sat, 29 April 2017, 07:49:24 »
Quote from: tp4tissue on Sat, 29 April 2017, 07:23:00
I don't know the software, but if it leverages the newer HPET, sub-ms should be possible through dithering.

But if the keyboard can't provide ms response, nor can USB if I'm correct, you cannot fix **** in software.

They just use regular rubber domes with no key rollover and regular PCB / grid layouts where keypresses are shared in the matrix.

You cannot fix crappy data.
MJT2 Browns o-rings - HHKB White - ES-87 Smoke White Clears - 87UB 55g

Offline tp4tissue

  • * Destiny Supporter
  • Posts: 13565
  • Location: Official Geekhack Public Defender..
  • OmniExpert of: Rice, Top-Ramen, Ergodox, n Females
Re: RESPONSE TIME (RT) research in psychology is bullsh*t
« Reply #3 on: Sat, 29 April 2017, 07:56:51 »
Quote from: iLLucionist on Sat, 29 April 2017, 07:49:24
But if the keyboard can't provide ms response, nor can USB if I'm correct, you cannot fix **** in software.

They just use regular rubber domes with no key rollover and regular PCB / grid layouts where keypresses are shared in the matrix.

You cannot fix crappy data.

Mmm... yes, that would be correct. Because of the way USB is polled, you cannot claim 1:1 sub-ms response.

You need a very dedicated hardware chain to do this.

The claim on the software box is probably something like 10-bit processing on an 8-bit LCD panel... it can process a 10-bit lookup table internally, but it still dithers out to 8 bits.
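
To put rough numbers on the USB polling point, assuming the default 125 Hz full-speed polling rate (a 1000 Hz gaming board shrinks the bin to 1 ms, but the quantization never goes away):

Code:
# Effect of USB polling on a single reported timestamp.
# Assumes the default 125 Hz full-speed polling rate (8 ms bins).
import math

POLL_INTERVAL_MS = 8.0    # 1000 ms / 125 polls per second

def reported_time(true_event_ms):
    """Snap a true event time to the next USB poll boundary."""
    return math.ceil(true_event_ms / POLL_INTERVAL_MS) * POLL_INTERVAL_MS

for true_rt in (412.3, 412.9, 417.1):
    print(f"true {true_rt:.1f} ms -> reported {reported_time(true_rt):.1f} ms")

# 412.3 ms and 412.9 ms both come back as 416.0 ms: the 0.6 ms difference is gone.
# Any sub-millisecond digits in the log reflect the software clock, not the
# moment the key was actually pressed.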

Offline iLLucionist

  • * Elevated Elder
  • Thread Starter
  • Posts: 2735
  • Location: Netherlands
  • Topre is Love.
Re: RESPONSE TIME (RT) research in psychology is bullsh*t
« Reply #4 on: Sat, 29 April 2017, 07:59:03 »
Quote from: tp4tissue on Sat, 29 April 2017, 07:56:51
Mmm... yes, that would be correct. Because of the way USB is polled, you cannot claim 1:1 sub-ms response.

You need a very dedicated hardware chain to do this.

The claim on the software box is probably something like 10-bit processing on an 8-bit LCD panel... it can process a 10-bit lookup table internally, but it still dithers out to 8 bits.

Realize that A LOT of psychology research today is based on this fail-by-design technique. It renders A LOT of research COMPLETELY, UTTERLY USELESS.

I would say that the sub-ms variation is just interference / stray current being picked up... error variation that is apparently systematic enough to pass for effect variation rather than error variation.

But the variation that IS captured at the sub-ms level is something other than the person's reaction.

Given the strong empirical tradition nowadays... a lot of experimental / cognitive psychology research is rendered obsolete / wrong. Think of the implications...
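
A quick simulation to put numbers on that; the 0.5 ms "true" effect, the base RT, and the jitter figure are all made up purely for illustration:

Code:
# Simulate a true sub-millisecond condition difference buried under
# keyboard/USB jitter. Effect size and noise values are invented for
# illustration, not taken from any real experiment.
import random
import statistics

TRUE_EFFECT_MS = 0.5     # hypothetical "approach" advantage for drink words
BASE_RT_MS = 450.0       # hypothetical mean reaction time
JITTER_SD_MS = 6.0       # rough stand-in for scan + debounce + polling noise
N_TRIALS = 40            # trials per condition for one participant

random.seed(1)

def measured_rt(mean_ms):
    """One trial: true RT plus measurement noise from the input chain."""
    return random.gauss(mean_ms, JITTER_SD_MS)

drink = [measured_rt(BASE_RT_MS - TRUE_EFFECT_MS) for _ in range(N_TRIALS)]
neutral = [measured_rt(BASE_RT_MS) for _ in range(N_TRIALS)]

observed = statistics.mean(neutral) - statistics.mean(drink)
print(f"true effect: {TRUE_EFFECT_MS} ms, observed difference: {observed:.2f} ms")
print(f"per-trial noise SD: ~{JITTER_SD_MS} ms")
# The per-trial noise is an order of magnitude larger than the effect, so any
# single participant's sub-millisecond "difference" is mostly the measurement chain.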
MJT2 Browns o-rings - HHKB White - ES-87 Smoke White Clears - 87UB 55g

Offline tp4tissue

  • * Destiny Supporter
  • Posts: 13565
  • Location: Official Geekhack Public Defender..
  • OmniExpert of: Rice, Top-Ramen, Ergodox, n Females
Re: RESPONSE TIME (RT) research in psychology is bullsh*t
« Reply #5 on: Sat, 29 April 2017, 08:08:56 »


Quote from: iLLucionist on Sat, 29 April 2017, 07:59:03
Realize that A LOT of psychology research today is based on this fail-by-design technique. It renders A LOT of research COMPLETELY, UTTERLY USELESS.

I would say that the sub-ms variation is just interference / stray current being picked up... error variation that is apparently systematic enough to pass for effect variation rather than error variation.

But the variation that IS captured at the sub-ms level is something other than the person's reaction.

Given the strong empirical tradition nowadays... a lot of experimental / cognitive psychology research is rendered obsolete / wrong. Think of the implications...


There are many things which impede humanity's welfare...

From politics to narrow science, to short-sighted education of drive and purpose in our children...

I don't think there is anything evil going on; it's merely the best man can do as a species of detached nodes.

To maintain productivity we must enslave large groups of humans to singular humans...

But this leadership cannot possibly make the best decisions, because each singular human's scope of knowledge and current information is too narrow.

The bandwidth of inter-human interaction is decidedly low...

All of this puts us in a very ineffective and overly myopic decision-tree process...




Offline iLLucionist

  • * Elevated Elder
  • Thread Starter
  • Posts: 2735
  • Location: Netherlands
  • Topre is Love.
Re: RESPONSE TIME (RT) research in psychology is bullsh*t
« Reply #6 on: Sat, 29 April 2017, 08:21:14 »



Quote from: tp4tissue on Sat, 29 April 2017, 08:08:56
There are many things which impede humanity's welfare...

From politics to narrow science, to short-sighted education of drive and purpose in our children...

I don't think there is anything evil going on; it's merely the best man can do as a species of detached nodes.

To maintain productivity we must enslave large groups of humans to singular humans...

But this leadership cannot possibly make the best decisions, because each singular human's scope of knowledge and current information is too narrow.

The bandwidth of inter-human interaction is decidedly low...

All of this puts us in a very ineffective and overly myopic decision-tree process...

If there is a god / creator, he/she made one big mistake: giving humans a private self (the fact that what you think is private to yourself and cannot be observed by somebody else until you tell that other person what you think / feel).

This gave way to egocentricity, self-absorption, greed, and lying. Private motives, secret agendas.

The second fault is not giving mankind true empathy. You can only truly know what somebody else is going through if you witnessed / went through it yourself. So if you haven't had cancer, you don't know how the other person truly feels and cannot put yourself in their shoes. If this WERE possible, I think people would be more loving and kind to other people.

The third fault is shortsightedness. People are very bad at long-term planning. And that's where this ridiculous science comes from. "If only I have this dataset now. Screw that it is hacked a bit, adjusted a bit here. It gives me this publication and then this awesome full professor promotion."

So then we end up with what I call the "third reality": the reality that science as a social institution creates, which non-scientists often perceive as "truth". "It has been researched, so it must have some merit to it." And then psychopaths manage to fake "truth" by misusing academic institutions. And then we have generations to come who are naive and take it for granted that universities and researchers are all self-transcendent, purely busy with "finding out the truth" (whatever that is), while in fact researchers are just people with jobs who want / need to make promotion to feed themselves.

And there we are... the downward spiral.
MJT2 Browns o-rings - HHKB White - ES-87 Smoke White Clears - 87UB 55g

Offline Spopepro

  • Posts: 229
Re: RESPONSE TIME (RT) research in psychology is bullsh*t
« Reply #7 on: Sat, 29 April 2017, 15:28:49 »
Yah. And let me guess--most of your colleagues don't have the technical skill or knowledge to follow or construct a proper experiment. Someone set it up like this once, published, and now it's in the literature as a method for everyone to copy without critique or understanding. It's even worse with statistical models and tests where researchers pass around R and SPSS scripts for turn-key data analysis because no one in the social sciences (and most in the life sciences (and sadly many in the physical sciences)) understands enough stats to fully design and understand their experiment. And then they scratch their heads over widespread reproducibility problems.

No... you understand this really, really well. As far as your last paragraph, I have a lot to say here, but I'm currently on my phone waiting for a student to show up for a make-up exam, and typing it all out would be mad painful. I'll come back with more whisky in me and a better keyboard...

Offline iLLucionist

  • * Elevated Elder
  • Thread Starter
  • Posts: 2735
  • Location: Netherlands
  • Topre is Love.
Re: RESPONSE TIME (RT) research in psychology is bullsh*t
« Reply #8 on: Sat, 29 April 2017, 15:35:59 »
Quote from: Spopepro on Sat, 29 April 2017, 15:28:49
Yah. And let me guess--most of your colleagues don't have the technical skill or knowledge to follow or construct a proper experiment. Someone set it up like this once, published, and now it's in the literature as a method for everyone to copy without critique or understanding. It's even worse with statistical models and tests where researchers pass around R and SPSS scripts for turn-key data analysis because no one in the social sciences (and most in the life sciences (and sadly many in the physical sciences)) understands enough stats to fully design and understand their experiment. And then they scratch their heads over widespread reproducibility problems.

No... you understand this really, really well. As far as your last paragraph, I have a lot to say here, but I'm currently on my phone waiting for a student to show up for a make-up exam, and typing it all out would be mad painful. I'll come back with more whisky in me and a better keyboard...

You are so goddamn right you wouldn't believe it. Fortunately, my psych education was really loaded with math and statistics, so I (think I) know my stuff.

But yeah, pretty much everything you say.

I'm curious what you have to say. I'll join you with some cognac :-P
MJT2 Browns o-rings - HHKB White - ES-87 Smoke White Clears - 87UB 55g

Offline tp4tissue

  • * Destiny Supporter
  • Posts: 13565
  • Location: Official Geekhack Public Defender..
  • OmniExpert of: Rice, Top-Ramen, Ergodox, n Females
Re: RESPONSE TIME (RT) research in psychology is bullsh*t
« Reply #9 on: Sat, 29 April 2017, 20:19:18 »
Quote from: Spopepro on Sat, 29 April 2017, 15:28:49
Yah. And let me guess--most of your colleagues don't have the technical skill or knowledge to follow or construct a proper experiment. Someone set it up like this once, published, and now it's in the literature as a method for everyone to copy without critique or understanding. It's even worse with statistical models and tests where researchers pass around R and SPSS scripts for turn-key data analysis because no one in the social sciences (and most in the life sciences (and sadly many in the physical sciences)) understands enough stats to fully design and understand their experiment. And then they scratch their heads over widespread reproducibility problems.

No... you understand this really, really well. As far as your last paragraph, I have a lot to say here, but I'm currently on my phone waiting for a student to show up for a make-up exam, and typing it all out would be mad painful. I'll come back with more whisky in me and a better keyboard...



The net was loosened simply to produce more PhD graduates...

Then the university can point to them and be like, look at all these smart people we've produced.


Offline ErgoMacros

  • Posts: 313
  • Location: SF Bay Area
Re: RESPONSE TIME (RT) research in psychology is bullsh*t
« Reply #10 on: Sat, 29 April 2017, 23:25:29 »
OK, you like experiments... make one to test your set-up.

Create a simple timing circuit (with a 555 timer, or 2?) and set them to fire like this:
 

 timer #1  | _____/''''''\________ and repeat
 timer #2  | __________/''''''\____ and repeat
           +------a----b-----


a = time of first firing
b = time of 2nd firing

Each firing closes a key switch.

Determine the actual delta time between timer #1 and #2 with an oscilloscope.
Have the test software tell you the time difference from a to b.
See if it even comes close to the actual (oscilloscope) time.
See if it reports the same time for every pair of events (it should; any jitter is noise that you'll have to reduce your accuracy/resolution claims by).
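
The after-the-fact analysis is then just a comparison against the scope reading, something like the sketch below; the reported intervals are placeholders standing in for whatever the RT software logs:

Code:
# Compare the intervals reported by the RT software against the
# oscilloscope-measured delta between timer #1 and timer #2.
# The numbers below are placeholders, not real measurements.
import statistics

SCOPE_DELTA_MS = 50.00                          # ground truth from the oscilloscope
reported_ms = [56.1, 48.3, 62.9, 51.0, 58.7]    # what the software claims for a -> b

errors = [r - SCOPE_DELTA_MS for r in reported_ms]

print(f"mean bias: {statistics.mean(errors):+.2f} ms")
print(f"jitter (SD of error): {statistics.stdev(errors):.2f} ms")
print(f"min-max spread: {max(reported_ms) - min(reported_ms):.2f} ms")
# Whatever the spread comes out to is the real resolution of the setup;
# claiming anything finer than that (let alone sub-millisecond) is not justified.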
Today's quote: "...but then the customer successfully broke that."

Offline fanpeople

  • Posts: 970
Re: RESPONSE TIME (RT) research in psychology is bullsh*t
« Reply #11 on: Sun, 30 April 2017, 01:31:05 »
i like turtles.....


Offline iLLucionist

  • * Elevated Elder
  • Thread Starter
  • Posts: 2735
  • Location: Netherlands
  • Topre is Love.
Re: RESPONSE TIME (RT) research in psychology is bullsh*t
« Reply #12 on: Sun, 30 April 2017, 05:00:03 »


Quote from: tp4tissue on Sat, 29 April 2017, 20:19:18
The net was loosened simply to produce more PhD graduates...

Then the university can point to them and be like, look at all these smart people we've produced.

It's like that, but also different. You have to reason from the top, which consists of corporate psychopaths who are merely in it for money, status, and power, though in science mostly the latter two (you really don't get rich off science, exceptions apply).

Incompetent PhDs who have no clue how science works get hired but don't make it to the top, so the top is safe again. I knew I wanted to do research from day 1 as an undergrad. So I affiliated myself with people who would "transport" me into a science position. I always looked for mentors from whom I could learn the tricks of the field. That brought me very far and gave me a competitive edge. Now the people at the top in my field at my organization don't like me "coz I know my stuff". Every time you are attacked for your "research content" it's merely a political game. Things like "stay away from my position, you threaten me" become things like "your sample size is too small" or "that idea is so straightforward, why research it". You know, the same way suppressing citizens becomes "for national security reasons".

Further, the top makes it increasingly difficult to get there. Picture this. Back in the 90s, top journals didn't have these very strong data requirements. So two things are going on here. First, of course the way experiments were designed and data was collected and analyzed wasn't stringent enough. Yes, it was too easy to exclude cases, to get away with small sample sizes, to not replicate your studies, and still get into top journals. That's all true. But now that is used as an authority argument by the top.

The people who are now at the top are the ones who got publications into the key top journals with such sloppy science, and who are now telling the new generation that they suck, imposing ridiculous data-collection (empirical) requirements that nobody has the resources for. Win-win: the new generations don't get to the top, because the requirements are so stringent you won't get your top publications.

Finally, what this does is make the system produce psychopaths. If you want to reach the top, there is only one way: cheat. Fix your data. Remove cases. Create the perfect dataset. STILL, even after all this fraud, top journals don't want complex statistical analyses. If your story isn't simple or straightforward you won't get in. Similarly, if you don't fit their political agenda (for instance, you research theory X but theory Y is popular over there), you also won't get in.

All this "do good science with good data" talk is just window dressing for the top to keep themselves safe in their castle and keep everyone else out.

The days when "science was all about working on interesting things" are long gone. The culture in, at least, social science and economics (which is a social science) is exactly the same as that of lawyers, politicians, and accountants. The currency is academic papers, positions are scarce, and it is one big competition.
MJT2 Browns o-rings - HHKB White - ES-87 Smoke White Clears - 87UB 55g