Children’s Technology Review (CTR) is an ongoing rubric-driven survey of commercial children’s digital media products for children from birth to 15 years. Like Consumer Reports (no affiliation), CTR takes no advertising, and there are no entrance fees or gimmicks like affiliate links or hidden costs to publishers. Started in 1993 by Warren Buckleitner, the for-profit service is supported by the annual “Dust or Magic” institute and subscriber fees. It is dedicated to helping children by helping their teachers, librarians, publishers and parents stay up to date on the latest digital products. CTR is sold as a subscription and is delivered weekly to subscribers, who also receive access to the review database, called CTREX (the Children’s Technology Review Exchange). Learn more at http://www.childrenstech.com.
LONG DESCRIPTION WITH HISTORY
Since 1993, Children’s Technology Review (CTR) has been the name of a systematic survey of children’s interactive media, with reviews written by “Picky Teachers” — reviewers with preschool or elementary classroom experience who have achieved inter-rater reliability on the same review instrument. The work is supported by “clean money,” namely subscriber fees, publication sales and Dust or Magic conference registrations. No income comes from sponsorships, selling award seals, grants or affiliate sales programs.
The underlying theoretical framework behind the ratings, originally a Master’s project based on research conducted from 1983 to 1993, has remained unchanged. It can be summarized by the following guiding questions:
- “What does the child walk away from the experience(s) with that they didn’t have when they first came to the experience(s)?”
- “How does the experience empower a young child?”
- “Does this experience leverage the potential of technology, specifically interactive media, in a way that traditional, non-digital or non-linear experiences cannot?”
Our mission is to provide objective reviews of children’s interactive media products. You can get a sense of our “voice” on children’s interactive media by watching a review in progress.
Picky Teacher (www.pickyteacher.com) is our avatar, serving as a symbol for our voice — a mascot that symbolizes our passion for putting the child, and known pedagogy, first when it comes to children and technology. She’s a tough grader with a disdain for PR, marketing, advertising, affiliate sales, fancy writing, politics and grown-up agendas. She (or he) loves “magic,” giving high grades to technology products that empower children and foster active learning. She doesn’t like sluggish, buggy apps or games, sloppy writing, hasty illustrations or inaccurate content. She has a disdain for copycats, and loves fearless leaders.
After ten years, the Picky Teacher mascot (portrayed by Ann Orr, Ed.D., our longtime Senior Editor, who was kind enough to pose as the Picky Teacher) is back from sabbatical in the form of a free, public database, thanks to the work of Matthew DiMatteo, CTR’s Director of Publishing. We’ve decided that typing “picky teacher” into a browser is easier to remember than “children’s technology review.”
While we’re still not ready to remove the “beta” label from the review database, we are ready to show our subscribers how it works.
CTR subscribers have full access to a new review database system that makes it easier to find, sort and (most importantly) join a conversation about a children’s interactive media product.
You’ll notice the ratings have been converted into a report card with grades, and it is now possible to comment, change a password or find a similarly designed product. This new design helps us fulfill our original 1993 mission: to make it easier than ever to find out what a real picky teacher would say about current products. Have a look and give us your grade.
DATABASE FACTS AT A GLANCE
- Products reviewed: commercial INTERACTIVE products marketed toward children birth to 15 (n=15,398 as of June 2014). These include apps, video games, web sites, hardware and software.
- Ratings assigned: 11,450
- Mascot: Picky Teacher, BS, MA, MS, BA PADI
- Number of staff: Four, plus interns
- Not reviewed: linear or non-interactive media, including books, videos, and many types of toys
- Philosophy: constructivist, technological empowerment of a child
- Date of first review: 1982
- Publisher: Active Learning Associates, Inc., 120 Main Street, Flemington NJ USA
- Editor: Warren Buckleitner
- CTR is independently supported by subscriber fees, sales of books, YouTube advertising and our series of Dust or Magic Institutes. Because Warren Buckleitner has been a contributor to the New York Times, CTR abides by the NYTimes rules for freelancers, when it comes to such things as media tours or product samples.
- No advertising (other than ads on our YouTube videos, which are selected by Google’s algorithms).
- No consulting. We (painfully) turn down lucrative consulting and beta review offers.
- We work for children, not for publishers. We’re friends with many wonderful publishers who make children’s interactive products. However, our friends know that while we might like them as people, our rubric may not like their products, if “like” is defined by a high or low rating. It’s about the science rather than the politics.
- No big grants from Bill Gates, Susan Crown and others. Sadly. But grant chasing can be a distracting affair.
- No Affiliate Links. Unlike many review sites, we do not make money using “affiliate link” programs offered by Apple or Amazon. We have no financial incentive for you to purchase or not purchase a particular product.
The core evaluation system was designed as a Master’s project in 1982; the first published review was written in 1984, based on work at the High/Scope Educational Research Foundation by Warren Buckleitner. Reviews were published from 1984 to 1993 as an annual book called The Survey of Early Childhood Software (High/Scope Press).
In 1993, Warren left High/Scope to start a graduate degree, and turned the annual into a bi-monthly newsletter called Children’s Software Revue. The first issue was published in the Spring of 1993; the name was changed in 2009 to Children’s Technology Review (CTR). The CTR review database has been used for research for mainstream publications.
As of June 2014, the database contained 15,398 entries of all forms of interactive products. These include apps, tablets, video games, web sites and some toys. CTR no longer provides comprehensive coverage of children’s apps. We target high profile apps with low ratings, or high potential apps from small publishers who lack the resources for publicity.
HIGH RES ART
- CTR Masthead
- Editor’s Choice Seal for 2014
- Picky Teacher when she’s happy
- Sample cover (of the monthly issue)
STAR RATINGS, RUBRICS, SEALS AND AWARDS
The generic rubric was an attempt to map a Piagetian-inspired (constructivist) theoretical framework onto the then-emerging category of commercial digital media. It is a generic system, weighted to reward products that foster feelings of child control with higher scores. The rubric used today is largely the same as the original. When used by a novice reviewer, however, the instrument does not generate reliable ratings.
The inter-rater reliability process typically takes a minimum of 20 products and 6 months for a person familiar with the basics of educational psychology. Multiple rubrics help us better understand specific genres of products. The internal motto: “our rating system is the least-worst out there.” While our approach does generate quantitative ratings, both as a percentage and on a 1-to-5-star scale, it is important to understand the larger context of these numbers, as well as the current state of the market.
This seal signifies Picky Teacher’s approval of a product.
Products that receive higher ratings (generally 4.3 stars or better) may be deemed “Editor’s Choice.” This means the chances are low that a child will be disappointed by the product.
Note that we use a dated seal system that publishers can display at their option. Awards are issued without fanfare; no money changes hands as part of this award or rating process.
External validity is increased by working with other reviewers and organizations. The CTR database drives the KAPi Awards (given at CES) and the BolognaRagazzi Digital Award, given each spring at the Bologna Children’s Book Fair. The Dust or Magic events give CTR reviewers a chance to compare notes with other researchers, publishers and reviewers, in our ongoing search for five stars.
The money we make from Dust or Magic registrations and sales from subscriptions supports this work.
There are no sponsoring organizations or external funders to please or displease.
The task of assigning a fixed rating to any interactive product is an art; a process that is loaded with subjectivity. We assign ratings with this in mind, and see a review as the start of a conversation about a particular product. If you read the copy of the review, you should be able to see exactly why a particular product didn’t get a higher rating. We also make an extra effort to accompany high or low ratings with specific examples.
It is also important to understand that CTR’s rating system is an academic attempt to apply a constructivist, active-learning theoretical framework to children’s interactive media; this bias is baked into the rubric. We acknowledge that not everyone shares this definition of quality.