About Children’s Technology Review

Children’s Technology Review (CTR) is a continually updated, rubric-driven survey of commercial children’s digital media products for children from birth to age 15. It is designed to start an educational conversation about commercial interactive media products, with the underlying admission that there is no perfect rating system. Designed for teachers, librarians, publishers and parents, CTR is sold as a subscription and is delivered both weekly and monthly to subscribers, who are granted unlimited access to the CTREX review database. See the CTREX launch announcement.

HISTORY

Started in 1993 after ten years of research at the High/Scope Educational Research Foundation, Children’s Technology Review (CTR) publishes reviews of children’s interactive media products. Reviews are written from an educator’s point of view, voiced by “Picky Teachers”: reviewers with preschool or elementary classroom experience who have achieved inter-rater reliability on the same review instrument. The work is supported by “clean” money, namely subscriber fees, publication sales and Dust or Magic conference registrations. No income is derived from sponsorships, selling award seals, grants or affiliate sales programs.

The rating system was originally a Master’s project (Survey of Early Childhood Software, Buckleitner, 1984), and the underlying theoretical framework behind the ratings has remained unchanged over the years. The ratings can be summarized by the guiding questions that a reviewer takes into the evaluation process:

  • “What does the child walk away from the experience(s) with that they didn’t have when they first came to the experience(s)?”
  • “How does the experience empower (or disempower) a child?”
  • “Does this experience leverage the potential of technology in a way that traditional, non-digital or non-linear experiences cannot?”
  • “How does this product compare with similar products?” 

WATCH A REVIEW

Get a sense of our “voice” on children’s interactive media by watching a review in progress.

[Embedded YouTube video: a review in progress]

LIMITATIONS & MANAGING BIAS
No review system is free of bias, and we are no exception. Our evaluation instrument (and the resulting ratings) is designed to reward settings that empower children, fostering an active, responsive, child-controlled setting. This bias comes directly from our instrument, which was designed to layer a rubric over Jean Piaget’s stage theory. Lower ratings are given to products that remove control from a child or that contain sloppy interactive design elements, such as an over-illustrated menu, sound that can’t be adjusted, an introduction that can’t be interrupted, sloppy language localization, gender or ethnic bias, poor leveling, commercial agendas and other factors that might be noted by the reviewer.

• We are selective in what we review. There are now thousands of children’s products (mostly apps) being published each year, and we are no longer able to cover the market comprehensively. We do not review videos, non-interactive toys, books, movies or many free apps.

• We make mistakes. Our team is dedicated, but overworked and underpaid. If you think reviewing video games and apps is fun, try spending an evening matching shapes with Dora. Actually playing with Dora’s not so bad, but shooting a video and writing an accurate review can be tedious. So please know that your paid subscription helps us buy more coffee and review more products, and that we’ve designed our reviews so it is possible for a publisher or subscriber to respond directly to the author of the review. As we like to say, “a review is the start of a conversation.”

• We only selectively child test. We continually preach about the value of child testing, but in reality we do only a very limited amount, at the Mediatech Foundation, and it is limited to high-stakes products. We understand that child testing is a very inexact process, and that a child’s opinion of a product will vary greatly based on the social setting, their developmental level and numerous other factors. Still, what children have to say, and how a product helps them behave, is amazingly helpful information.

• We try to help publishers get five stars. If we are critical of a product, we attempt to provide a concrete example of why. We make an extra effort to accompany high or low ratings with specific examples.

CTR’s rating system is an academic attempt to apply a constructivist, active-learning theoretical framework to children’s interactive media, and this bias is burnt into the rubric. We acknowledge that not everyone shares our definition of quality.

Look at our service with this in mind.

WHO IS THE PICKY TEACHER?

Picky Teacher (www.pickyteacher.com) is our mascot, and our “voice.” The idea is that everyone has an inner “picky teacher.”

She’s a fictional character that embodies our bias toward educational products that empower young children, use tried-and-true pedagogical techniques, and offer quality illustrations and audio at a fair price.

She’s a tough grader with a disdain for PR, marketing, advertising, affiliate sales, fancy writing, politics and grown-up agendas. She (or he) loves “magic,” giving high grades to technology products that empower children and that foster active learning. She doesn’t like sluggish, buggy apps or games, hasty illustrations or inaccurate content. She doesn’t like copycats, and loves fearless leaders.

After ten years, the Picky Teacher mascot (who is sometimes represented by a photo of Ann Orr, Ed.D., former CTR Sr. Editor) is back from sabbatical in the form of a free, public database, thanks to the work of Matthew DiMatteo, CTR’s Director of Publishing. Typing “picky teacher” into a browser is easy to remember, and it instantly reminds you of our perspective on digital products.

CTREX: CHILDREN’S TECHNOLOGY REVIEW EXCHANGE

In the summer of 2014, our review database was revised to allow flexible searches of the latest products. We decided to employ a freemium (or Velvet Rope) business model. The database is free to browse, but full reviews and reports, along with back issues, are limited to paid CTR subscribers.

You’ll notice it is now possible for subscribers to comment, change a password or find a similarly designed product.

This new design helps us fulfill our original 1993 mission: to make it easier than ever to find out what a real picky teacher would say about current products, and to let others — including the publisher — in on the conversation.  Have a look and give us your grade.

CTREX DATABASE FACTS AT A GLANCE

  • Products reviewed: commercial INTERACTIVE products marketed to children from birth to age 15 (n = 15,398 as of June 2014). These include apps, video games, web sites, hardware and software. Not all product reviews are publicly available.
  • Ratings assigned: 11,450
  • Mascot: Picky Teacher, BS, MA, MS, BA PADI
  • Number of staff: Four, plus interns
  • Not reviewed: linear or non-interactive media, including books, videos, and many types of toys
  • Philosophy: constructivist, technological empowerment of a child
  • Date of first review: 1982 (first published review: 1984)
  • Publisher: Active Learning Associates, Inc., 120 Main Street, Flemington NJ USA
  • Editor: Warren Buckleitner

FUNDING

• CTR is independently supported by subscriber fees, sales of books, YouTube advertising and our series of Dust or Magic Institutes. Because Warren Buckleitner has been a contributor to the New York Times, CTR abides by the NYTimes rules for freelancers when it comes to such things as media tours or product samples.
• No advertising (other than ads on our YouTube videos, which are selected by Google’s algorithms).
• No consulting. We don’t accept consulting or beta-review offers. It’s not easy, but that’s not what we’re about.
• We work for children, not for publishers. We’re friends with many wonderful publishers who make children’s interactive products. However, the children’s publishing community knows that while we might like them as people, our rubric may not like their product, if “like” is defined as a high rating. It’s about the science rather than the feelings.
• No grants from Gates, Susan Crown, MacFound or others. Grant chasing is hard and can be distracting, and soft money can have subtle strings. If you like what we do, please use this link to subscribe. If you’d like to fund us, call 908-284-0404.
• No Affiliate Links. Unlike many review sites, we do not make money using “affiliate link” programs offered by online stores like Apple or Amazon. It’s not that sites that use these services are biased. However, it’s not what we’re about. We’re not a store or catalog; we don’t feature products, and we have no financial incentive for you to purchase or not purchase a particular product.

HISTORY, CONTINUED

The core evaluation system was designed as a Master’s project in 1982; the first published review was written in 1984, based on work at the High/Scope Educational Research Foundation by Warren Buckleitner. Reviews were published from 1984 to 1993 as an annual book called The Survey of Early Childhood Software (High/Scope Press).

In 1993, Warren left High/Scope to start a graduate degree, and turned the annual into a bi-monthly newsletter called Children’s Software Revue. The first issue was published in the spring of 1993; the name was changed in 2009 to Children’s Technology Review (CTR). The CTR review database has been used for research by mainstream publications.

As of June 2014, the database contained 15,398 entries covering all forms of interactive products. These include apps, tablets, video games, web sites and some toys. CTR no longer provides comprehensive coverage of children’s apps. We target high-profile apps with low ratings, or high-potential apps from small publishers who lack the resources for publicity.

STAR RATINGS, RUBRICS, SEALS AND AWARDS

The generic rubric was an attempt to map a Piagetian-inspired (constructivist) theoretical framework onto the then-emerging category of commercial digital media. It is a generic system, weighted to reward products that foster feelings of child control with higher scores. The rubric used today is largely the same as the original. When used by a novice reviewer, however, the instrument does not generate reliable ratings.

The inter-rater reliability process typically takes a minimum of 20 products and six months for a person familiar with the basics of educational psychology. Multiple rubrics help reviewers better understand specific genres of products. The internal motto: “our rating system is the least-worst out there.” While our approach does generate quantitative ratings, both as a percentage and as a 1-to-5-star rating, it is important to understand the larger context of these numbers, as well as the current state of the market.
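To make the two forms of the rating concrete, here is a minimal sketch (in Python) of how a rubric percentage might be converted into a star rating. The linear mapping and the rounding rule are illustrative assumptions, not CTR's published formula; only the 1-to-5 star scale and the 4.3-star Editor's Choice threshold (described below) come from this page.

    # Illustrative sketch only: the linear mapping and one-decimal rounding
    # below are assumptions, not CTR's published formula.

    def stars_from_percent(score: float) -> float:
        """Map a 0-100 rubric percentage onto the 1-to-5 star scale."""
        if not 0.0 <= score <= 100.0:
            raise ValueError("score must be between 0 and 100")
        # Assumed linear mapping: 0% -> 1 star, 100% -> 5 stars.
        return round(1.0 + (score / 100.0) * 4.0, 1)

    def is_editors_choice(stars: float) -> bool:
        """Products rated roughly 4.3 stars or better may be deemed Editor's Choice."""
        return stars >= 4.3

    stars = stars_from_percent(83.0)
    print(stars, is_editors_choice(stars))  # 4.3 True

For example, a product scoring 83 percent on the rubric would map to 4.3 stars under this assumed linear rule, just over the Editor's Choice line.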

EDITOR’S CHOICE AWARDS


This seal represents Picky Teacher’s approval of a product. If you agree with the Picky Teacher’s theoretical point of view on interactive media, chances are good that you’ll trust this mark.

Products that receive higher ratings (generally 4.3 stars or better) may be deemed “Editor’s Choice.” This means the chances are low that a child will be disappointed by the product.

Note that we use a dated seal system that publishers can display at their option. Awards are issued without fanfare; no money changes hands as part of this award or rating process.

External validity is increased by working with other reviewers and organizations. The CTR database drives the KAPi Awards (given at CES) and the BolognaRagazzi Digital Award, given each spring at the Bologna Children’s Book Fair. The Dust or Magic events give CTR reviewers a chance to compare notes with other researchers, publishers and reviewers, in our ongoing search for five stars.

The money we make from Dust or Magic registrations and subscription sales supports this work. There are no sponsoring organizations or external funders to please or displease.

 


© 2014 Children's Technology Review. All rights reserved.