Children’s Technology Review is a longitudinal study of children’s interactive media products that started in 1985. The reviews and ratings are housed in an internal database with 12,522 commercial products (as of May 2011) for approximately 18 platforms. Subscribers have online access to select fields from this database. Here are some articles designed to help you understand our rating process; or you can scroll down to see our evaluation instrument.

The Hunt for Five Stars
“What does a 5 star interactive media product for children look like?” At CTR, this question is our holy grail. Like the moving world record line that is superimposed over an Olympic swimming event, it represents the continually fluid yardstick for quality that we hold every app, game, toy or site against…

Are These 5 Star iTunes Ratings for Real?
We’re not going to accuse somebody of posting fake ratings. But when a poorly designed app gets 23 five star ratings, things start to look fishy. If you’re the publisher and you’d like to explain what’s going on…

10,500 Objective Reviews, At Your Fingertips
Reviews from Children’s Technology Review are easier to find, and they look better, too. The improved format includes a cleaner, easier-to-browse layout and one-click search scripts, making it possible to zoom in on the reviews you want. The database is available only to CTR subscribers.

CTR, May 2012: Low Ratings and Sad Faces
We’re always sorry to give any product a less-than-glowing review, but like a doctor who tells you that you need to lose a few pounds, our job can’t be swayed by hurt feelings. Will our rating of a product change? Not unless the product does, and that leads our readers to a question we think about a lot: “How can a product earn five stars?”

How Does a Product Get a Five Star Rating?
Last week (Oct 3, 2011) I spoke on a panel on the evaluation of interactive media at the Fred Rogers Center and I referred to this page, from the September 2011 issue of Children’s Technology Review. I issued a strong disclaimer that every theory can find a champion in technology — in other words, one person’s view of quality can (and should) differ from another person’s view. As Jesse Schell (of CMU) reminded the group, measuring quality in an interactive product is like trying to assess beauty…

How not to make a children’s eBook: Notes from the 2013 Jurors of the BolognaRagazzi Digital Prize.
To better understand our rating process, we recommend that you watch this video.
Of these, nearly 10,000 have been formally evaluated using the same instrument, by trained raters who have achieved inter-rater reliability on the instrument. Many older products are stored at the Strong Museum of Play, where they can be used for research. More circulation-friendly titles, such as those for game consoles, are available for public access at The Mediatech Foundation.
The six categories in the instrument can help you better understand factors that may be related to “quality” in a children’s interactive media product. In brief, the instrument favors software that is easy to use and child controlled, has solid educational content, is engaging and fun, is designed with the features you’d expect to see, and is worth the money given the current state of children’s interactive media publishing.
CTR editors typically recommend programs that receive a 4.2 star rating or better; a product must get at least a 4.4 rating to get an Editor’s Choice seal. You can easily search for these titles by rating in the Children’s Software Finder (just enter 4.2 or greater for the rating search field).
How the Ratings are Assigned
Ratings are assigned by educators (CTR reviewers) who have been trained in the use of the instrument below. “Trained” means that they have achieved inter-rater reliability when rating the same title independently. In cases where two reviewers come up with different ratings, a third reviewer is consulted, along with additional child testing, until all raters “can sleep with” their rating. The system is not perfect, but it attempts to be a “least worst” rating system. In assigning ratings, the reviewers consider as much feedback from children as possible, gathered by way of the Mediatech Foundation, test schools, and select families whose preferences are known and who match the product. It is also important to note that the same instrument has been used since 1993, making it possible to compare products from year to year. It also accounts for the fact that ratings have increased significantly over time.
CTR’s Generic Rubric: An Attempt to Measure “Quality”
To get a program’s 1-5 star rating, you need to do some simple math: a process that is automated for reviewers using the instrument, but is defined in detail here, for replication purposes. Add up the points in each category (always = 1 point, some extent = .5 points, never = 0 points, and N.A. = Not Averaged) and then divide by the number of items in the category. This number can then be converted to a 0 to 5 point scale. It is important to match the instrument with the type of software. In other words, you can’t rate a program low in “Educational Value” if it is designed primarily as a game. That’s where the “N.A.” category comes in. Finally, it is very important to consider the date that the review was written. Remember that a highly rated program from 2003 might be equal to a poorly rated program in the context of current-day software and hardware.
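As a rough illustration of the per-category math described above, here is a short sketch in Python. The function name and the example ratings are hypothetical; the point values (always = 1, some extent = .5, never = 0, N.A. excluded from the average) come from the text.

```python
# Point values per the instrument: A (Always) = 1, SE (Some Extent) = 0.5,
# N (Never) = 0; "NA" items are Not Averaged, i.e. excluded entirely.
POINTS = {"A": 1.0, "SE": 0.5, "N": 0.0}

def category_score(ratings):
    """Average the counted items in a category, then convert to a 0-5 scale."""
    counted = [POINTS[r] for r in ratings if r != "NA"]
    if not counted:
        return None  # the whole category was marked Not Averaged
    return sum(counted) / len(counted) * 5

# A category with one "NA" item is averaged over 3 items, not 4.
score = category_score(["A", "SE", "A", "NA"])
```

Note how a single “N.A.” shrinks the denominator, which is why the text calls it a powerful tool for matching the instrument to the type of software.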
CHILDREN’S INTERACTIVE MEDIA RATING INSTRUMENT
N = Never, or 0 points
SE = Some Extent, or .5 points
A = Always, or 1 point
NA = Not counted in the calculation
I. Ease of Use (Can a child use it with minimal help?)
Note that this factor is combined with “Childproof” on the short form of this instrument.
N SE A NA
__ __ __ __Skills needed to operate the program are in range of the child
__ __ __ __Children can use the program independently after the first use
__ __ __ __Accessing key menus is straightforward
__ __ __ __Reading ability is not a prerequisite to using the program
__ __ __ __ Graphics make sense to the intended user
__ __ __ __Printing routines are simple
__ __ __ __It is easy to get in or out of any activity at any point
__ __ __ __Getting to the first menu is quick and easy
__ __ __ __Controls are responsive to the touch
__ __ __ __Written materials are helpful
__ __ __ __Instructions can be reviewed on the screen, if necessary
__ __ __ __ Children know if they make a mistake
__ __ __ __Icons are large and easy to select with a moving cursor
__ __ __ __Installation procedure is straightforward and easy to do
II. Childproof (Is it designed with “child-reality” in mind?) (Note that this factor is combined with I. in the short form of this instrument.)
__ __ __ __ Survives the “pound on the keyboard” test; more recently, the digital playdoh test.
__ __ __ __ Offers quick, clear, obvious response to a child’s action
__ __ __ __The child has control over the rate of display
__ __ __ __The child has control over exiting at any time
__ __ __ __The child has control over the order of the display
__ __ __ __Title screen sequence is brief or can be bypassed
__ __ __ __When a child holds a key down, only one input is sent to the computer
__ __ __ __Files not intended for children are safe
__ __ __ __Children know when they’ve made a mistake
__ __ __ __This program would operate smoothly in a home or classroom setting
III. Educational (What can a child learn from this program? What do they walk away from the experience with that they didn’t have when they first came to it?)
__ __ __ __Offers a good presentation of one or more content areas
__ __ __ __Graphics do not detract from the program’s educational intentions
__ __ __ __Feedback employs meaningful graphic and sound capabilities
__ __ __ __Speech is used
__ __ __ __The presentation is novel with each use
__ __ __ __Good challenge range (this program will grow with the child)
__ __ __ __Feedback reinforces content (embedded reinforcements are used)
__ __ __ __Program elements match direct experiences
__ __ __ __Content is free from gender bias
__ __ __ __Content is free from ethnic bias
__ __ __ __A child’s ideas can be incorporated into the program
__ __ __ __The program comes with strategies to extend the learning
__ __ __ __There is a sufficient amount of content
IV. Entertaining (Is this program fun to use?)
__ __ __ __The program is enjoyable to use
__ __ __ __Graphics are meaningful and enjoyed by children
__ __ __ __ This program is appealing to a wide audience
__ __ __ __ Children return to this program time after time
__ __ __ __Random generation techniques are employed in the design
__ __ __ __Speech and sounds are meaningful to children
__ __ __ __Challenge is fluid, or a child can select their own level
__ __ __ __The program is responsive to a child’s actions
__ __ __ __The theme of the program is meaningful to children
V. Design Features (How “smart” is this program?)
__ __ __ __ The program has speech capacity
__ __ __ __Has printing capacity
__ __ __ __Keeps records of child’s work
__ __ __ __“Branches” automatically: challenge level is fluid
__ __ __ __A child’s ideas can be incorporated into the program.
__ __ __ __ Sound can be toggled or adjusted
__ __ __ __Feedback is customized in some way to the individual child
__ __ __ __ Program keeps a history of the child’s use over a period of time
__ __ __ __Teacher/parent options are easy to find and use
VI. Value (How much does it cost vs. what it does? Is it worth it?) Considering the factors rated above, and the average retail price of software, rate this program’s relative value considering the current software market.
__ __ __ __ __ __ __ __ __ __
1 2 3 4 5 6 7 8 9 10
PROCEDURE FOR GENERATING A STAR RATING:
We use an automated set of calculated fields to generate the ratings.
First count the number of items in the category, then add up the total points. Divide the points by the number of items, and multiply by 100 to get the percent. Divide the percent by 20 to convert it to the 1 to 5 star rating. Consider also any extra hardware attachments required to get the full potential of the program, e.g., a sound card, CD-ROM drive, etc.
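A minimal sketch of this procedure in Python, assuming the percent-to-stars conversion maps 0-100% onto the 0-5 star scale (i.e., percent divided by 20); the function name and example numbers are hypothetical:

```python
def star_rating(total_points, num_counted_items):
    """Convert category points to a star rating.

    total_points: points earned (A = 1, SE = 0.5, N = 0).
    num_counted_items: number of items not marked NA.
    """
    percent = total_points / num_counted_items * 100
    return percent / 20  # e.g., 80% becomes 4.0 stars

# 10.5 points over 12 counted items is 87.5%, or 4.375 stars.
rating = star_rating(10.5, 12)
```

On this reading, a perfect score (every counted item marked “Always”) works out to exactly 5 stars.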
Copyright 1985, by Warren Buckleitner
Don’t forget that this form is generic! There is no substitute for child testing.
To use it properly, you have to look at a lot of similarly designed products, and remember that the “NA” field can be a particularly powerful tool for influencing the overall score. If you’re looking for an easy-to-print form to use with test families (especially good for video games), use the Serious Games Testers Evaluation form.