4 / Surface v deep
In the 1970s, Ference Marton and Roger Säljö of the University of Gothenburg in Sweden noticed that students took different approaches to learning. Some students focused on remembering information. Others focused on understanding it: connecting it to other information, figuring out its structure, when it might be useful, making predictions based on it, and so on. Marton and Säljö christened the former surface or shallow learning and the latter deep learning.
Reading the last paragraph, you may have already formed the opinion that surface learning is bad and deep learning is good. But that is not always true. Some essential knowledge just doesn’t have much depth to go after. For instance, the letter m makes the “mmm” sound. There simply is no conceptually deep understanding of this fact to be had. And almost all deep learning relies on knowledge of surface details. You can’t construct an argument integrating multiple causes of World War II if you can’t recall any of them.
Still, moving from being a beginner to an expert in any topic requires deep learning, so students who default to a surface approach will, sooner or later, have to be induced to go deeper. The task is made more difficult whenever we tell students they will be tested. Since most assessments operate at a surface level, the prospect of being tested can act as a signal to students that they need only memorize, not understand.
How do you push students to take a deep approach to a learning task? An effective method is to give them a goal that requires deeper understanding. For example, instead of the goal being to pass an assessment, help the student identify a project in which they are invested and which requires depth to complete.
Another approach is to lead the student gently into deeper waters. This is where tutoring comes in.
Take this example from the circulatory system study. A student and a tutor have just read the sentence, “If a substance can pass through a membrane, the membrane is permeable to it.”
S: So, it explains itself. If something is permeable to something, then that thing can pass through the other thing.
T: So, how would the …
S: And if it’s impermeable, it can’t.
T: And how does that relate back to the capillary walls?
S: Well, the capillary walls …
T: Can you explain?
S: Well, this is how I learned it.
T: Uh huh.
S: In the cell, it’s made up of these things, and then, it has these protein things, like this (draws a protein lying across a cell wall like a channel through the wall). They’re really, really big. And then, there’s a little substance like oxygen, and it can just go through here (pointing to the wall). But a big substance like sugar, which is tons of letters, has to go through the protein first.
T: And how does, how does that relate to the cell membrane or the capillary?
S: Well, if it’s too big—if something’s too big—to go into the capillary through the capillary membrane, it can’t, but then maybe, if it has protein, it can. Okay.
T: Okay.
S: Alright.
Twice the tutor asked “How does that relate …”—a move that is specifically designed to prompt a deep response (Chi’s scaffolding type #13). The tutor could have, instead, asked “What is passing through the cell membrane here [in the lungs]?” But even if the student had answered, correctly, “oxygen and carbon dioxide,” they would be retrieving information they had read earlier, a surface response.
This is precisely the difference between the “content-free” prompts Chi tried to get tutors to use in the experiment we described earlier and the “content-specific” prompts tutors used instead. The former push students toward deeper responses and so deeper learning.
Constructing deep prompts like this does not come naturally to tutors, as Chi found. Closed yes-or-no questions won’t do. Deep prompts typically begin with “how,” “why,” or “what if.” But the critical feature is that a deep prompt should not be answerable simply by repeating something you’ve learned. It presses the student to produce the meaning or implication of knowledge. For example, in the dialog above, the tutor asks the student to take the idea of permeability and apply it to the specific case of what is going on in the capillaries.
As it does here, a deep prompt will compel a student to do some cognitive heavy lifting. Often, that means fitting together multiple pieces of information. Some of those pieces the student will only have encountered recently, so the tutor can help with more scaffolding: serving up missing pieces of information but pressing the student to fit them together. The process of a student constructing or generating something they didn’t know before—or only sort of knew—has been shown to be a highly effective way of learning. But if the tutor does the fitting together on behalf of the student, the student will gain little. Whoever does the work does the learning.
So, one way to evaluate a tutoring session is to ask whether it pushed the tutee toward deep learning or whether it kept skating along the surface. Micki Chi’s study of circulatory system tutoring asked exactly that. When Chi looked at which tutor moves resulted in surface or deep learning, there were three big takeaways.
First, tutor explanations led, at best, to surface responses from the student. In fact, most of the time, they didn’t lead to content-ful responses at all—just an “uh huh.”
Second, scaffolding mostly led to surface responses too, and deep responses only occasionally. But, as we just saw, that is most likely due to the kind of scaffolding prompts the tutors used rather than a problem intrinsic to scaffolding. In other words, scaffolding can lead to deep responses, but few of the tutors in Chi’s study knew how to prompt for them.
Third, there was one move that was more likely than any other to lead to deep responses. We will meet it soon.