{"id":1915,"date":"2013-10-16T00:00:00","date_gmt":"2013-10-16T04:00:00","guid":{"rendered":"http:\/\/aurora-institute.org\/blog\/susan-lowes-learning-to-learn-guest-blogger\/"},"modified":"2019-12-16T12:54:12","modified_gmt":"2019-12-16T17:54:12","slug":"susan-lowes-learning-to-learn-guest-blogger","status":"publish","type":"post","link":"https:\/\/aurora-institute.org\/blog\/susan-lowes-learning-to-learn-guest-blogger\/","title":{"rendered":"Susan Lowes: Learning to Learn (Guest Blogger)"},"content":{"rendered":"

Today\u2019s post is by guest blogger, Susan Lowes, who is the Director of Research and Evaluation at the Institute for Learning Technologies, Teachers College, Columbia University. Enjoy her post!<\/p>\n


LEARNING TO LEARN ONLINE: <\/strong>
\nA WORK IN PROGRESS IN HELPING STUDENTS TO LEARN SELF-REGULATION<\/strong><\/p>\n

Susan Lowes, Ph.D.
\nInstitute for Learning Technologies
\nTeachers College\/Columbia University<\/p>\n

In the early days of online learning at the K-12 level, one of the biggest concerns was the high rate of attrition. My particular interest is in what I call virtual classrooms\u2014where the courses are paced by week, most communication is asynchronous, and there is a heavy emphasis on student-student communication. Virtual classrooms saw a lot of attrition, from students dropping out to students slowly fading away to students falling so far behind they couldn\u2019t catch up. This made the courses harder to teach and harder to participate in. As an evaluator researching the effectiveness of these courses, I saw many complaints from students who posted to discussion forums but never got answers or from participants in group work who waited forever for fellow group members who never showed up.<\/p>\n

At the time, there was a great deal of discussion of screening, including not only how to screen but whom to screen in or out. This led to the development of predictive instruments, such as Peggy Roblyer\u2019s ESPRI, and to self-paced \u201corientation\u201d modules that asked students to reflect on their own ability to self-manage their learning. One issue with this type of orientation is that it is voluntary, so it may well be the students who need it least who actually complete it. Another issue is that these modules don\u2019t really replicate the experience of an online course, and it is difficult for students who have never been in one to imagine how the demands will affect them.<\/p>\n

I had two issues with the notion of screening, one philosophical and one from my own experience surveying students in these kinds of courses. The philosophical one was that many of us who embarked on online learning did so because we hoped it would open up opportunities for learning to those who had previously, for whatever reason, been excluded. (These reasons could range from the lack of availability of higher-level courses, or courses in specific subjects, to discomfort with face-to-face classrooms.) However, the screening instruments suggested that certain behaviors, in particular the ability to manage your own time, were necessary for success as an online learner. If this was the case, then many would be excluded.<\/p>\n

The big surprise for me, though, was when I found that a huge majority of students who were asked to list the greatest benefit of taking an online course wrote that it had helped them to learn to manage their time well. In other words, they had to have had the opportunity to take the course if they were to learn this \u201csoft\u201d skill. If we had only allowed well self-regulated students into the course in the first place, the attrition rate might have been lower but this skill would not have been learned.<\/p>\n

I, and many others, became convinced that rather than screening students out, we had to change our focus and figure out ways to keep students in. Although Peggy Roblyer hoped that instruments like hers could be used to identify where help was needed, in practice very few schools had the time to do this type of analysis. However, some schools, including those I studied, began to put considerable effort into supporting students, in particular by having someone on site to monitor student progress. This focus on the student\u2019s external support system helped a lot, but the more I thought about it, the more I thought that we also needed to focus on the student\u2019s internal environment. The bottom line is that students need to learn how to learn online. I felt there was too much of an assumption that this simply happens by osmosis.<\/p>\n

There are a lot of aspects to learning online, from learning to read on the screen to learning how to communicate clearly in a discussion forum, but experience told us that a fundamental aspect is what psychologists call \u201cself-regulation\u201d\u2014in education, the ability to take control of and evaluate your own learning.<\/p>\n

There is a fairly extensive literature on self-regulation, much of it from a few decades ago, and one of the most tested approaches was developed by Julian Rotter in 1966 and called the \u201clocus of control.\u201d Locus of control is based on a social learning theory that posits that individuals who feel they can control their own environment are likely to adapt more easily to new situations and new environments than those who feel they are controlled by outside forces. Locus of control scores fall on a continuum, from high internal to high external. Those who feel very much in control of what happens to them are said to have a high internal<\/b> locus of control, while those who feel that what happens to them is controlled by forces outside of themselves are said to have a high external<\/b> locus of control.<\/p>\n

It seemed likely that the concept of locus of control could be useful for assessing students who were being asked to adjust to a new type of learning in an unfamiliar virtual environment. In addition, it seemed possible that locus of control scores could not only be used as a diagnostic, identifying students who need help learning to learn online, but could also help students learn to learn online by giving them an opportunity to reflect on their own learning. Rotter\u2019s locus of control instrument asks respondents a series of questions to see whether they perceive certain actions or events to be more influenced by their personal decisions and choices (an indication of an internal locus of control) or by forces beyond their control (an indication of an external locus of control).<\/p>\n
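To make the scoring concrete, here is a minimal sketch, in Python, of how a Rotter-style forced-choice questionnaire can be tallied. The items, wording, and cut-off below are invented for illustration and are not the published scale; the sketch only commits to the idea described above, that each item pairs an internal attribution with an external one, and that the total number of external choices places the respondent on the continuum, with higher scores indicating a more external locus of control.<\/p>\n

<pre>
# Illustrative scoring of a Rotter-style forced-choice locus-of-control quiz.
# The items are stand-ins invented for this sketch, not the published scale.

ITEMS = [
    {"internal": "My grades mostly reflect how hard I work.",
     "external": "My grades mostly depend on how the teacher grades."},
    {"internal": "If I plan my week, I can keep up with an online course.",
     "external": "No matter what I do, the course schedule gets away from me."},
    {"internal": "When group work stalls, I can usually get it moving again.",
     "external": "Whether group work succeeds is up to the other members."},
]

def score_responses(choices):
    """choices: one 'internal' or 'external' string per item.
    Returns the count of external choices; higher = more external."""
    if len(choices) != len(ITEMS):
        raise ValueError("one choice per item is required")
    return sum(1 for choice in choices if choice == "external")

if __name__ == "__main__":
    example = ["internal", "external", "internal"]
    score = score_responses(example)
    leaning = "more external" if score * 2 > len(ITEMS) else "more internal"
    print(f"External score: {score} of {len(ITEMS)} ({leaning})")
<\/pre>\n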

In research (or my research anyway), serendipity often plays a role. The first serendipitous moment was in Spring 2012, when I happened to come across an article on self-regulation that discussed Rotter\u2019s work. It occurred to me that his instrument could be used to show students where they were on the locus of control continuum and then give them an opportunity to reflect on the result. I have been evaluating the online courses created and delivered by Pamoja Education for the International Baccalaureate for a number of years, and I proposed this at a planning meeting in Summer 2012. The second serendipitous moment came when I found that a member of the Pamoja Education staff had recently heard a program on Rotter in BBC Radio 4\u2019s Mind Changers series (http:\/\/www.bbc.co.uk\/programmes\/b01gf5sr<\/a>) and loved it, so he enthusiastically embraced the idea, as did their Faculty Advisor, who also taught the online Psychology course.<\/p>\n

The result was that in September 2012, we asked all incoming Pamoja students to take the locus of control quiz and think about their scores, which we emphasized were not set in stone but could change. We also noted that it seemed likely that having a high internal locus of control would be an asset in taking online courses but not necessarily for other aspects of life.<\/p>\n

We had interesting findings, some of which were contrary to what we expected and all of which will need to be tested in the coming year with more complete data sets. For example, we found that students in Mathematics had locus of control scores that differed significantly from those of students in Psychology (p<\/i> = .03), with the Mathematics students having more internal scores, but that there were no statistically significant differences among the other 10 courses. We had expected to find that students who subsequently dropped their courses would have higher scores (more external), but this was not the case. This may have been because the drop group was small, or it may have been because the drop group was exhibiting self-regulation in making the decision to drop. We found that those with lower comfort levels with computers had higher scores (more external) (p<\/i> = .053), perhaps because they felt overwhelmed by the technology demands of an online course, but we also found that those who met often with their site-based coordinators tended to have higher scores than those who met with them infrequently. As with the drops, this may have been because those who met less frequently were already more self-regulated and did not need to meet so often. Similarly, those who stated that one of their concerns was time management had lower scores (more internal) than those who did not have this concern, which suggests that having that concern is a necessary step in self-regulation.<\/p>\n
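For readers who want to see what sits behind p-values like these, the sketch below shows the general shape of such a group comparison, using invented scores and a Welch\u2019s t-test from SciPy. It is only an illustration of the type of test involved, not the actual Pamoja data or the exact procedure used in this evaluation.<\/p>\n

<pre>
# Hypothetical comparison of locus-of-control scores between two groups
# (e.g., Mathematics vs. Psychology students). Data are invented; this
# only illustrates the kind of test that produces a reported p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Lower scores = more internal, higher scores = more external.
math_scores = rng.normal(loc=9.0, scale=3.0, size=60)
psych_scores = rng.normal(loc=10.5, scale=3.0, size=60)

# Welch's t-test does not assume equal variances in the two groups.
t_stat, p_value = stats.ttest_ind(math_scores, psych_scores, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
<\/pre>\n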

We asked them to do the same thing later in the year, but this time we asked them to reflect on the following three questions in their course blogs:<\/p>\n