I don't NEED to differentiate

"Most of us know teachers who teach very successfully in a textbook-lecture, teacher centered style and have had terrific student achievement doing so." (Nunley, 2006, p. 103)
It's possible, right?  It is entirely possible that your classroom is full of auditory learners who don't mind listening to a bald guy talk while swapping slides.  It is entirely possible that all of your students are satisfied taking notes and look forward to college lecture halls in their not-so-distant future.  If memorizing and regurgitating facts are what you're looking for AND if students are satisfied AND if parents endorse this type of teaching AND if administration is kosher with the results, then keep doing what you're doing.  

If the above commentary describes your situation, keep doing what you're doing. 

Chances are pretty good that your classroom is different.  It's broken.  It needs to be fixed.  For your students' sake... you (and I) do NEED to differentiate. 

Any questions?

Placing the KIBOSH on classroom management

"Many of us took a course in classroom management during our pre-service education.  If we really want to move education in the direction of student centeredness and encourage differentiation, then teacher education courses in classroom management should expire.  We need to replace them with 'classroom leadership' courses.  Strike the term 'classroom management' from education because the concept self-perpetuates the problem.  The more you manage others, the more they need to be managed.  You cannot manage people into being responsible, intrinsically motivated, cooperative people who strive to reach their own personal potential.  For that we need classroom leaders."
(Nunley, 2006, p. 84)
In my first year or two of teaching, I struggled with control.  I didn't struggle in the stereotypical classroom-management sense; my problem was the exact opposite.  Perhaps my biggest downfall was thinking that I could control students for part of the class period and then they would magically figure the rest out on their own.  I had this picture in my mind that my think-alouds and rigorous math problems would someday yield critical thinkers, students who would naturally mature into independent learners.  I was wrong. 

Kathie Nunley suggests that a teacher's desire for control in his/her classroom is one of eighteen obstacles to differentiating learning experiences within the classroom walls.  The more I think about it, the more I agree.  A few quick bullet points sum up my thinking:
  • My natural tendency is to AVOID FAILURE and create successful opportunities for my students. 
    Truth: Without failure, nothing changes.  Failure has the potential to encourage innovation and discovery.
  • My natural tendency is to create the SAME learning environment for ALL students.  It's a control issue.
    Truth: Providing students with several legitimate choices to learn/demonstrate the same learning target empowers students and increases motivation.
  • My natural tendency is to believe that a CONTROLLED class is a WELL-RUN class.
    Truth: Learning, not control, is the currency of education.
Nunley sums up this obstacle well on p. 81.
"One of the main reasons many teachers resist differentiated approaches to teaching is they think it will cause them to lose control in their classrooms.  Teachers like control...Whether or not learning is occurring is beside the point when what matters most is the control they have over their students."
In an era of administrators conducting classroom walk-throughs, it is hard not to get caught up in the view from the window rather than carefully observing students' conversations and productions.  When my students are discussing their quizzes in groups for five minutes and it seems like chaos, that's okay.  When my students are spending two minutes in a think-pair-share, that's okay.  When students are sitting around the room in groups, discussing their practice problems while sending individuals to the board to check their answers, only to find out they got them all wrong at first, that's okay.  If our focus is truly on learning, does it matter if our classrooms look messy and out of control to the uninformed eye?

I don't really know what it means to be a "classroom leader," but I do know that managing my students for several years wasn't working out so well.  Letting go and, in turn, encouraging more student discussion and reflection has yielded incredible results.  I'm slowly figuring out that success has less to do with managing students and more to do with pointing them in the right direction: learning.

Will you join me in placing the KIBOSH on the term "classroom management" for the sake of our students?

I thought I was differentiating?

Carol Tomlinson writes,

"A class is not differentiated when assignments are the same for all learners and the adjustments consist of varying the level of difficulty of questions for certain students, grading some students harder than others, or letting students who finish early play games for enrichment.  It is not appropriate to have more advanced learners do extra math problems, extra book reports, or after completing their "regular" work be given extension assignments." (Tomlinson, How to Differentiate Instruction in Mixed-Ability Classrooms, 1995, p. 9)
If you've heard about differentiation, you've probably come across some of Tomlinson's writing.  She's practically the differentiation guru.  I want to be the first to admit it publicly: if the description above is NOT differentiation, I have a lot to learn about this topic.  I know that differentiation happens through product, process, and content, but if it doesn't involve extra math problems and extension assignments, then my pre-service AND in-service education has been... how do I say this politely?... misleading!

I'm looking forward to reading the rest of Differentiating the High School Classroom by Kathie F. Nunley.  I received close to a dozen books for Christmas and this was not one of them.  It's checked out from the local area education agency's professional library and is due back in a few weeks, so I figured I had better read it before digging into the new stack.  Look for future posts related to differentiation as I learn from Nunley's book. 

Nearly every educator I know claims to differentiate in his/her classroom.  As I look to reshape my own definition of differentiation-in-action, I'd like to know what it looks like with your students.  Feel free to leave a comment below with your best differentiation thoughts and stories. 

Using neighborhood data to create statistical models

I am looking for more ways to encourage students to use data that is relevant and interesting to them in my Statistics & Discrete Math class.  In a recent project, students were asked to answer the question, "Is size a useful predictor of a house's assessed value?" Data from websites such as the Johnson County Assessor were made readily available for students to access and research the ins and outs of houses on their own streets and in their housing developments.  Additional resources were posted on my website for students to use as they created a second statistical model with Iowa City or Cedar Rapids (two cities within 20 minutes that are much larger than the town I teach in) houses as data points.




Students were asked to produce a 1-2 page double-spaced report summarizing their findings:
  • A brief summary of the data in each mini-study (local neighborhood and CR/IC).  This included a response to the following question, using data, graphs, regression lines, significance tests and correlation coefficients to back up their opinion: "Is size a useful predictor of price in your neighborhood? ...and in Iowa City or Cedar Rapids?"
  • A prediction of the student's house's assessed value based on square feet, using both his/her neighborhood model and the Iowa City or Cedar Rapids model.  Does this match up with the assessed value according to the websites?  Explain a few possible reasons why or why not.
  • Finally, a paragraph explaining any possible errors observed in the way the data was collected or in the process by which the study took place.
Scatterplots, regression line equations and prediction intervals using Microsoft Excel were the norm for each student.
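
For anyone who would rather script the analysis than point-and-click in Excel, here is a minimal sketch of the same calculations in Python.  The square footage and assessed values below are made-up numbers for illustration (not actual project or assessor data), and the 2000-square-foot example house is likewise hypothetical.

```python
# A minimal sketch of the size-vs-value analysis in Python instead of Excel.
# All data values here are fabricated for illustration only.
import numpy as np
from scipy import stats

sqft = np.array([1100, 1350, 1500, 1750, 1900, 2200, 2400, 2600])
value = np.array([118000, 142000, 151000, 176000, 189000, 214000, 228000, 251000])

# Least-squares regression line, correlation coefficient, and slope test
fit = stats.linregress(sqft, value)
print(f"value = {fit.intercept:.0f} + {fit.slope:.1f} * sqft")
print(f"r = {fit.rvalue:.3f}, p-value for the slope = {fit.pvalue:.4f}")

# 95% prediction interval for a hypothetical 2000 sq ft house
x0 = 2000
n = len(sqft)
y_hat = fit.intercept + fit.slope * x0
residuals = value - (fit.intercept + fit.slope * sqft)
s = np.sqrt(np.sum(residuals ** 2) / (n - 2))        # residual standard error
sxx = np.sum((sqft - sqft.mean()) ** 2)
t_crit = stats.t.ppf(0.975, df=n - 2)
margin = t_crit * s * np.sqrt(1 + 1 / n + (x0 - sqft.mean()) ** 2 / sxx)
print(f"predicted value: {y_hat:,.0f} plus or minus {margin:,.0f}")
```

Running the same steps a second time with the Iowa City or Cedar Rapids data would produce the second model for comparison.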


This project parallels quadrant D learning as written in the 9-12 Data Analysis/Statistics & Probability Iowa Core Curriculum essential concepts.

(cross-posted at the SCSD Technology Corner blog)

Data-driven instruction?

I appreciate fellow math educator and edu-blogger David Cox and the way he consistently peppers me with questions about my assessment practices on this blog.  Most recently David asked:

"Do you offer a pre-test before you begin instruction on the learning target(s)? If so, what do you do with the student who aces the pre-test while you are going through the initial instruction?"
I replied to David briefly in the comments, but a proper explanation warrants a much more thorough response.  Nearly every educator I know is familiar with the idea of "using data to drive instruction."  I'd like to describe two experiments (let's call them "action research projects" to make them sound more professional) currently in the works in my classroom.


Using data to assess relative strengths and weaknesses of an entire class
At the beginning of the most recent unit of study, students took a thirteen-question multiple-choice "pre-test" to assess their current level of understanding of surface area and volume.  I provided all of the necessary formulas and attempted to choose two questions that correlated with each learning target.  While multiple-choice questions are not the norm in my class and present some obvious issues with guessing correct answers, I decided to go this route for a quick turnaround.  Because the pre-test was a new idea, I prefaced it to my students as an opportunity to find out what they already know and what they need more help understanding.  The worst-case scenario from the students' perspective, I explained, was not taking the assessment seriously and in turn suffering through an entire unit full of ideas and concepts they already understood.  That incentive seemed to make sense to them, because I did not observe any dot patterns or hear any grumblings.

I have several ideas in mind for utilizing the pre-test data:

  1. Just as I explained to my students, if a learning target seems to be understood by many on the pre-test, I will spend very little time teaching that concept.  On the flip side, learning targets that students struggled with trigger a mental note to spend lots of time discussing that target and any misconceptions.

  2. I will be giving an alternate form (same questions, different order) to the students at the end of the unit, in conjunction with the usual open-ended summative assessment.  I will be teaching one class using the traditional textbook sequence: lateral & surface area of prisms, lateral & surface area of pyramids, volume of prisms, volume of pyramids, etc.  The second class will be taught using an approach I've always wanted to try: teach the difference between lateral area, surface area and volume, followed by each formula, in one 84-minute block.  The next few class periods will be spent working on many practice problems, identifying which formula to apply, and addressing any misconceptions that arise.  Using the pre-/post-test difference data between the two classes, in conjunction with several other data points still to be determined, I would like to find out which approach seems to be more effective (a rough sketch of that comparison follows below).  Look for a future post describing the results.
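
To make those two ideas concrete, here is a rough sketch of how the numbers might get crunched.  Nothing below is my actual gradebook: the question-to-target mapping, the student scores, and the section labels are all hypothetical.  The logic is the point, namely percent correct per learning target on the pre-test, and average post-minus-pre gain per section.

```python
# Hypothetical sketch: summarize pre-test results by learning target and
# compare average pre/post gains between two class sections.
from collections import defaultdict

# Two pre-test questions map to each learning target (mapping is made up)
question_to_target = {1: "LT1", 2: "LT1", 3: "LT2", 4: "LT2",
                      5: "LT3", 6: "LT3", 7: "LT4", 8: "LT4"}

# Each student's pre-test: question number -> answered correctly?
pretest = {
    "student_a": {1: True, 2: True, 3: False, 4: False,
                  5: True, 6: False, 7: True, 8: True},
    "student_b": {1: True, 2: False, 3: False, 4: False,
                  5: True, 6: True, 7: False, 8: True},
}

# Percent correct per learning target across the class
correct, attempts = defaultdict(int), defaultdict(int)
for answers in pretest.values():
    for q, got_it in answers.items():
        attempts[question_to_target[q]] += 1
        correct[question_to_target[q]] += got_it

for target in sorted(attempts):
    print(f"{target}: {100 * correct[target] / attempts[target]:.0f}% correct")

# Average post-minus-pre gain for each section (scores out of 13, made up)
def average_gain(pre, post):
    return sum(post[s] - pre[s] for s in pre) / len(pre)

traditional = average_gain({"a": 5, "b": 6}, {"a": 11, "b": 10})
formulas_first = average_gain({"c": 4, "d": 7}, {"c": 12, "d": 11})
print("traditional sequence gain:", traditional)
print("formulas-first gain:", formulas_first)
```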


Using data to differentiate instruction and assignments for individual students
My building has a goal to increase the computation scores of students as reported on standardized tests.  Our department selected Geometry as the class to focus on skills such as long division, dividing mixed numbers, multiplying decimals and simplifying radicals, all without a calculator.  The system we came up with was to administer pre- and post-tests at the beginning and end of the course and focus on two skills each week.  Typically the skill remediation involved a five to ten minute re-teaching session during class followed by a self-scoring drill-and-kill worksheet (DaK for short).  At the end of each week, a short six-question quiz provided an interim picture of students' progress on the two skills.

As the semester progressed, the completion rate of the DaK worksheets almost always declined, and the students who needed the practice the most tended to be the ones who chose not to do it.  The system had the best of intentions, but it just wasn't working.

Two weeks ago, I decided to give the quiz first.  Students who successfully demonstrated an understanding of the concepts were exempt from the DaK worksheet and their quiz score was recorded.  All other students were given the DaK and were only permitted to take the post-quiz if their DaK was completed.  New evidence of learning replaced old evidence in the grade book.

Students love this new idea so far!  It works for them, because an incentive exists to do well on the first quiz, and the DaK worksheet only has to be completed if a need exists.  It also works for me, because during the five or ten minute re-teaching session, students are hungry to learn based on their mediocre quiz scores.  Students who have already mastered the concept can be pinpointed as additional resources for struggling learners.  I admit that this system works because the instructional time is limited and it only happens one time per week.  If I could figure out a way to generalize this idea of differentiation and manage classroom behavior all at once, I could probably write a book about it and retire.  :)
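
For anyone curious what that record-keeping rule looks like in practice, here is a tiny sketch.  The passing threshold and the function name are assumptions for illustration, not my actual gradebook; the point is simply that a strong first quiz exempts a student from the DaK, and a post-quiz taken after a completed DaK replaces the earlier score.

```python
# Hypothetical sketch of the quiz-first workflow; the threshold is an assumption.
PASSING = 5  # out of 6 questions on the weekly skills quiz

def recorded_score(first_quiz, dak_completed=False, post_quiz=None):
    """Score to record for one student for one week's two skills."""
    if first_quiz >= PASSING:
        return first_quiz          # exempt from the DaK worksheet
    if dak_completed and post_quiz is not None:
        return post_quiz           # new evidence replaces old evidence
    return first_quiz              # DaK not completed, so old evidence stands

print(recorded_score(6))                                   # exempt student
print(recorded_score(3, dak_completed=True, post_quiz=5))  # re-quizzed after DaK
print(recorded_score(2))                                   # skipped the DaK
```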

Are these examples of data-driven instruction?  How are you using data to differentiate for your class as well as individual students?

I just don't get it

The jury is still out on the validity of grading homework in the eyes of many educators in my sphere of influence.  If the purpose of homework isn't to check for students' understanding, what activities do classroom teachers rely on to figure out who "gets it" and who is still lost?  The next two diagrams summarize what I hear:

[diagram]
I regularly ask teachers I come in contact with to describe their formative assessment strategies.  Answers such as "think-pair-share" and "observe the non-verbal cues of the students throughout the class period" come up quite frequently.  It's the stuff that often doesn't land on paper.  Great.  Those are all examples of formative assessment that can drive future instruction.  Herein lies the controversy:

[diagram]
Once students complete a few problems or write an essay on paper (or take notes, or complete a word find, or sometimes just write their name and date and turn it in; how sad is that?), many educators I know automatically feel like a point value needs to be assigned.  Why?  How is a "thumbs up/thumbs down" prompt any different from an activity that involves students writing down an answer or their thoughts on paper, especially when they serve the same purpose?  Why is it universally acceptable to grade one and not the other?



.....I just don't get it.  Can someone explain this to me?