Papert’s death this week crossed my timeline in Maha Bali’s blog post and Audrey Watters’ newsletter No.173. I wondered when I had first heard of Seymour Papert. I remember studying Piaget’s work on my PG Certificate of Education course at the University of Liverpool in 1987/88, but I can’t recall studying the work of Papert, his protégé, at that time. Of course, I have read some of his work since then.
Then I remembered my first practical encounter with his work, one that is such a good example of what Audrey Watters, in Seymour Papert’s memory, is fighting for:
And I am committed to fighting for a world in which technologies – educational and otherwise – are not about enforcing control and compliance. (Newsletter No.173)
I was a mother of three young children working part-time at St Alban’s College, teaching whatever classes came my way. The most interesting of these was a Computing class for adults with what were then called Special Needs. What I did was up to me, and I tried to tailor the activities to the individuals in the class. The youngest student had cerebral palsy, with poor motor skills. He could use the BBC Micro computer with an input device provided by our local SEMERC (Special Education Micro Electronic Resource Centre). Sadly, funding for SEMERCs was removed in 1989 (thanks Maggie), so it was a memorable moment for the young man and me when we found the turtle robot in a cupboard and tried to work out what we could do with it. He soon learned to program in Logo, and I will never forget the expression of joy on his face when the turtle moved around the floor under his control. It dawned on me that the experience of pushing a toy car or train around the floor, which my able-bodied children had enjoyed from the time they could crawl, had probably never been available to this young man until that moment.
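For readers who never met Logo: the turtle responds to a handful of simple movement commands, so even a beginner’s first short program produces visible motion on the floor. Something like the following (an illustrative sketch only – I don’t remember the exact programs he wrote, and BBC Logo syntax is reconstructed from memory) would send the turtle round a square:

```
REPEAT 4 [FORWARD 50 RIGHT 90]
```

That immediacy – type a line, watch the robot move – is exactly what Papert designed Logo to deliver.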
I was the one learning that day, and it transformed my view of the possibilities of technology, still largely unfulfilled.
These are some very half-formed thoughts that I want to capture so I can re-connect and learn more later.
I have just read a post by Laura Goglia about an experience she had in school, reading part of a textbook when she was supposed to be reading Beowulf. She saw a picture that fascinated her so much that she learned about its purpose and its location (the Prado), and yesterday she brought her children to see it and told them her story. Reading her post reminded me of a related experience I had when visiting Manchester City Art Gallery as a student over forty years ago. I saw a picture that I interpreted in a rather disturbing way (but that’s another story), and only recently did I discover the artist’s ideas behind the painting. Maybe if we had had the Internet and smartphones in the 1970s I would have learned that at the time, but I was too lazy to check it out in the library. What our stories have in common is a visual impression that stuck in our minds: Laura went across time from the text to seeing the actual painting, and I went in the opposite direction.
The connection that brought me to Laura’s post was a Twitter conversation about whether or not teaching/learning needs to be entertaining. I was intrigued by the Twitter exchange, and understood more about what Laura meant after I had read her blog post. I am still pondering, though. We talk a lot about teaching and learning – sometimes meaning the activity and sometimes meaning the outcome, but that’s the English language for you.
Whilst I was at Laura’s blog, I read her previous post about the questions she needed to answer at a mid-term PhD assessment. Just after I read “Connectedness is the state of being able to recognize, understand, and act on connections across content, people, space, and time”, I glanced up the screen and noticed the Google cookie message. It offered me the chance to “learn more” and wanted me to click “got it”. “Pfft,” I thought – it’s Google that wants to learn more about me, so that it can connect ads to me across time and multiple locations. That sort of fitted with Laura’s sentence that I had just read, but then I re-read Laura’s questions and realised that her concern was with human learning. Her story in the first post was about learning something outside of the lesson plan, and that seems uplifting, and ever more possible when learners have access to the Internet to connect to people and ‘content’.
So is ‘connection’ an unequivocal good in human learning? And in machine learning?
Google (like other well-connected technology services) learns more about me every time I use it. Its active listening doesn’t seem quite as benign as the active listening of a human teacher in class. I don’t know if Laura’s teacher was aware of her distraction from Beowulf and was just happy that she was learning something. To learn about the painting, Laura had to disconnect from Beowulf.
Good teachers set up learning activities that offer a variety of possibilities; they don’t seek a uniform outcome for all learners, or expect learning to happen in one place or at one time (unless there are problems, they don’t follow their students home or into the playground).
Google’s knowledge is simultaneously partial and pervasive, following me around and trying to influence what I do. The difference between Google and the ‘good teacher’ is that Google wants to sell ads and demonstrate its influence on my purchasing (so it can sell more ads), whereas the ‘good teacher’ wants us to learn more than they want to teach. They want to help the learner shift the locus of control from the teacher to the learner.
So I am thinking that one of the things that good teachers will be concerned with is how they can help learners (partially) disconnect from services – how they can shift the locus of control from Google to themselves. And that is not easy.