sushi (Author), January 2, 2015

Well, I've changed the test link back to the original, but I'm including the self-assessment at the bottom of the first post. I've brought up the idea that visualization practice results in only temporary improvement. I've noticed that the Mozart Effect fades away completely within fifteen minutes; perhaps visualization is the same. Anyway, with the very limited number of questions I've posted, testing this is a bit out of our scope at the moment. Perhaps we can devise a better test.

"Some things have to be believed to be seen." - Ralph Hodgson
Yakumo, January 9, 2015

So, did you try to create further graphs? I'd really like to see some scatterplots.

I don't think that visualization practice has only a short-lived effect. In many cases, short-term practice brings short-term improvements, while long-term practice yields long-term improvements; those are possibly more subtle and harder to report.

As for a better test setup, measuring improvements linked to different training methods is incredibly hard because of the many fuzzy variables involved. You'd need a huge sample size to smooth out the noise from all the uncertainties. However, maybe we could correlate reported skills with other abilities, i.e. being good at a particular type of test. Or maybe not; that's the question. Just as training for IQ tests may help you with exactly those tests, it probably helps very little in daily life. My suggestion would be to let people do different types of tests and correlate the results with their reported tulpa-related abilities (including things like imposition, possession, or parallel processing).
sushi (Author), January 9, 2015

I haven't created anything intelligible. I'm discovering how inept I am at this spreadsheet stuff. I'll attach the numbers though. Maybe you can do better with it.

Attachment: numbers.xls

"Some things have to be believed to be seen." - Ralph Hodgson
Yakumo, January 10, 2015

Thanks for the file. I made two scatterplots; see the attachment below. Am I correct that the numbers for the letter and shape errors were calculated as time/(errors+1)? If so, we have no significant correlation concerning the letter test, and maybe a very weak one regarding the shapes.

I'm not used to statistics in psychology-related cases, but it is said that a correlation of r > 0.3 is small, r > 0.5 is good, and r > 0.75 is high; p < 0.05 is deemed significant. r² is the coefficient of determination, a measure of how well the regression line approximates the real data. r² = 1 would be a perfect fit; in our case it's very low, as many data points are far away from the regression line. I also did a Spearman rank-order correlation, which may be better suited to the dataset; it gave similar results.

However, may I ask you for the raw data (time and number of errors)? There's still a small chance we may get some information out of this.
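(In case anyone wants to reproduce these numbers themselves, below is a minimal Python sketch of how such correlations could be computed. The file name and the column names, reported_score, shape_time, and shape_errors, are placeholders rather than the actual spreadsheet's layout; the derived metric follows the time/(errors+1) formula discussed above.)

```python
# Minimal sketch: Pearson r / r^2 and Spearman rho for self-assessment vs. test performance.
# File name and column names are hypothetical; adjust them to the real spreadsheet.
import pandas as pd
from scipy import stats

df = pd.read_excel("numbers.xls")

# Derived performance metric: time / (errors + 1)
df["shape_metric"] = df["shape_time"] / (df["shape_errors"] + 1)

# Pearson correlation (linear relationship) and coefficient of determination
r, p = stats.pearsonr(df["reported_score"], df["shape_metric"])
print(f"Pearson r = {r:.3f}, r^2 = {r**2:.3f}, p = {p:.3f}")

# Spearman rank-order correlation (monotonic relationship, based on ranks)
rho, p_s = stats.spearmanr(df["reported_score"], df["shape_metric"])
print(f"Spearman rho = {rho:.3f}, p = {p_s:.3f}")
```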
sushi (Author), January 10, 2015

You're correct about the error calculations. Here's the raw data. And what do you suggest we do if there is no significant correlation?

Attachment: rawscores.xls

"Some things have to be believed to be seen." - Ralph Hodgson
Yakumo, January 12, 2015

I tried a bunch of different things, but it's even worse with the raw data. There's absolutely no correlation between reported score and the number of errors, or even the time taken. I also have to retract my statement about the correlation between reported score and the computed time/shape-errors value; that can't be taken seriously. All I found was a very weak but significant correlation between time and the number of errors: interestingly, the quicker people made slightly fewer errors. Instead of fancy graphs, I put the mean values into three tables; see the attachment.

Please do not waste your time trying to extract any cryptic relations from those numbers; I assure you the only thing you can see here is that there are none. The only thing you might say is that the longer you take, the more errors you are likely to make. I didn't expect that, but it seems plausible: someone who's really good at such tests usually doesn't take long to complete them, while a person struggling to find the right answer needs a lot more time. But then again, the correlation is very weak and I would not draw too many conclusions from it; that's more speculation than science.

Moreover, one has to be really careful when it comes to correlations and statistics in general. Especially with today's software, it's almost inevitable that you'll find some patterns in your data if the dataset is only large enough, but that doesn't mean there is any causality behind the relation of the factors involved. Even a strong correlation is only a hint, never proof of anything.

Still, I'm not unhappy with these results; at least they are not inconclusive. We have pretty strong evidence that there is no relation between self-assessed visualization skills and the ability to perform well in this sort of test. Of course a larger sample size and a broader test would be better, but I think it is highly unlikely that this would fundamentally change the results. So: negative results can also be good results, as rejecting a hypothesis might give you more information than upholding it.

As for a better test: I don't really have any great ideas yet, and my knowledge of psychology-related topics is limited. I would have proposed a more extensive shape-rotation test (the current error values, ranging only from 0 to 6, somewhat limit the statistical options), but after the latest results I'm not sure that would yield anything. Still, it would be fun and give even more robust results. Memorizing a detailed image and then answering questions about it might also be interesting.

Anyway, I strongly encourage you and all participants to continue; new knowledge is not gained easily. I'll gladly help with analyzing or interpreting data. It helps me not to forget all the statistics stuff, as I only use it sporadically at university.
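(To illustrate the point about software turning up patterns in noise, here's a small simulation sketch; it has nothing to do with our actual data. It draws a set of completely unrelated random variables and still finds several "significant" correlations at p < 0.05, purely by chance.)

```python
# Illustration only: with enough variable pairs, pure noise will yield some
# "significant" correlations at p < 0.05 even though no real relationship exists.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_vars, n_samples = 20, 40
data = rng.normal(size=(n_vars, n_samples))  # 20 independent random variables

hits, tests = 0, 0
for i in range(n_vars):
    for j in range(i + 1, n_vars):
        r, p = stats.pearsonr(data[i], data[j])
        tests += 1
        if p < 0.05:
            hits += 1

# Roughly 5% of the 190 pairs are expected to come out "significant" by chance.
print(f"{hits} of {tests} pairs significant at p < 0.05")
```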
sushi (Author), January 12, 2015

Well, I suppose I could add to the different parts of the test, or maybe break this into several separate tests. I don't know how much we'll get out of it, though. The more we break it up, the harder it gets to maintain, grade, and compare results.

"Some things have to be believed to be seen." - Ralph Hodgson
Yakumo, January 12, 2015

I would focus on one or two test types with more questions than the current one. Even a single big test will probably give more information than several small ones.