Edclick


By Dr. Harry Tennant


Monday, October 24, 2011

A/B testing for effective consequences

Ever wonder why junk mail looks the way it does? I'm looking at a letter from a credit card company that says on the envelope, "50,000 Bonus Points - That's $500 Toward Your Next Vacation." Inside there are several individually folded pieces of paper. There's variation in the fonts used. Some text is bolded. There's a P.S. There's a call to action: a form to fill out to apply for the card. A prepaid return envelope that's stamped "Priority Processing."

None of this is accidental or done on a whim. It's all been tested for effectiveness. Each feature's effect on the success rate (in this case, the number of people who apply for the card) has been measured. The features are there because they work.

The simplest and most common form of testing of this kind is A/B testing. Randomly divide your batch into two sets, A and B, and make them identical except for one feature: for example, set A has a P.S. and set B does not. Then keep track of whether you got a better response from set A or set B. (Actually, that's an easy one: tests have shown that the P.S. is among the text most likely to be read in a letter.)

A/B testing for effective consequences

Let's say you're seeing too many tardies in your school. What is the most effective consequence? Talking to the student? Lunch detention? After school detention? Try A/B testing.

First, define "effective consequence." Let's say consequence A is deemed more effective if the students with that consequence have fewer tardies over the subsequent two weeks than students with consequence B.

Second, randomly select the consequence. Flip a coin: heads, give consequence A; tails, give consequence B. Keep track of the consequence assigned to each student so you can give them the same consequence if they're tardy again.
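For readers who track discipline data in a spreadsheet or script, the "flip a coin, then stick with it" step can be sketched in a few lines of Python. This is only an illustration, not part of any product; the function and variable names are made up for the example.

```python
import random

# Remember each student's assigned consequence so repeat tardies
# get the same consequence (the "sticky" part of the assignment).
assignments = {}  # student id -> "A" or "B"

def assign_consequence(student_id):
    """Coin-flip on first tardy; reuse the same consequence afterward."""
    if student_id not in assignments:
        assignments[student_id] = random.choice(["A", "B"])
    return assignments[student_id]
```

The important property is that a student's group never changes once assigned; otherwise the two groups get mixed together and the comparison is meaningless.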

Keep track of the outcomes. (Shameless plug: with a product like our Discipline Manager, this happens automatically).

After a while, look at the outcomes. Did consequence A perform clearly better than consequence B? For example, if talking to the student was clearly more effective than lunch detention, make it your new policy. Now, is it also better than after school detention? Run another A/B test.
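If you want a rough check that a difference between the two groups is real and not luck of the draw, a standard tool is the two-proportion z-test: compare the fraction of students in each group who were tardy again within the follow-up window. The sketch below uses only the Python standard library; the numbers and names are illustrative, and for small groups a more careful test (or simply collecting more data) would be appropriate.

```python
import math

def two_proportion_z(repeats_a, n_a, repeats_b, n_b):
    """Compare repeat-tardy rates between consequence groups A and B.

    repeats_a / n_a: students in group A who were tardy again, out of all in A.
    Returns (z, p_value); a small p_value suggests a real difference.
    """
    p_a = repeats_a / n_a
    p_b = repeats_b / n_b
    pooled = (repeats_a + repeats_b) / (n_a + n_b)  # rate if groups were alike
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal distribution.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value
```

For example, if 10 of 100 students given consequence A were tardy again versus 30 of 100 given consequence B, the test reports a very small p-value, so the difference is unlikely to be chance. If both groups come back at 20 of 100, z is zero and the p-value is near 1, which is the "numbers are nearly the same" case discussed below.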

What if the numbers are nearly the same? Then it doesn't matter. Either just pick one as your policy or assign either consequence as the mood strikes you. (For the sake of simplicity, you're probably better off choosing one and sticking with it.)

Why bother?

Why bother with all this? If you're intent on improving your school, you need to know if the changes you make are actual improvements or just changes. Collect some data and know.

Posted at 9:19 AM Keywords: continuous improvement 3 Comments

Seth Stephens said...
I think this brings up a great point Dr. Tennant. I believe that all to often people(like administrators) come up with a new idea and think, it is great from their point of view, so it must be great for everyone. Many times these "great" ideas sound good, but when it comes time for implementation, the "foot soldiers" are the ones who discover the shortcomings. Your A/B plan could really help because it would judge overall performance of an idea, while also creating a great set of circumstances for feedback.

Monday, October 24, 2011 7:04 PM
