Broadcasting Telecasting (Jan-Mar 1957)


RADIO'S SHORT-CHANGE RATINGS

DIFFERENCE in techniques used in measuring print media and radio leaves the latter holding the short end of the stick, Walter B. Dunn of H-R Representatives, New York, told the Pittsburgh Ad Club last week. This is a condensed text.

THE LAST TIME I was here, the speaker, a station representative, set about crucifying all ratings; then he proceeded to build one up, a messiah of his own persuasion. From iconoclasm he turned into an acolyte at the altar of Alfred Politz. I was a little confused. Evidently the distinction was that his man didn't try to tell you how much audience your station had; he told you how much penetration you had. Since audience is people and penetration is people, it was a fine line he was walking.

Actually the broadcasters' problem is not too many ratings, but too many ratings by print people working with samples much too small, with limitations adequate for the finite character of print's limited circulations, addicted to partisan techniques all subject to absurdities and faddishness ... all subject to deliberate misinterpretation by our competitors.

It's easy enough to pillory any rating service by listing its booboos. For instance:

• One service found a rating for a station off the air.
• One service contained six tabulation errors on one summary page, all in favor of the subscriber.
• One service found up to 77% more children 4-11 years of age than actually existed viewing Disneyland in one of the major markets.
• One report of a major market in February last year found 10% more families than existed in the area listened to Mickey Mouse during the measured period.
• One service blasted the diary method in its pitches and in its promotion. Three years later, hot and cold running diaries wired for sight and sound outnumbered machines four-to-one in its latest service.

Little wonder that more children 4-11 were listening to the financial news report at 8 a.m. than adults. This is classic by now. Evidently there are thousands of Lenny Rosses languishing undiscovered in the primary grades of Los Angeles.

It is always important to remember, when using a given rating service or comparing one with another, that different techniques measure different things at varying measures of efficiency and, furthermore, have an inherent built-in bias or two peculiar to their method.

Equally unfair to broadcasting as the limitations imposed on it by the finite techniques developed to measure print media is the small sample. This is our curse! This is the cross we bear. There is not a Ph.D. worth his salt who can't prove beyond the shadow of a doubt that every ironclad theory of statistics is solidly behind him, backing him up foursquare!

But speaking of small samples, a vice president of the Hooper service admitted under oath back in 1947 that his firm completed only one-plus call per quarter hour to a home that had a set in use. I insist that this is why radio sets-in-use has fallen so. I insist that this is why daytime tv ratings are so erratic. When mama turns off her set, puts her hat on, and slams the door behind her, sets-in-use in 1½ Pacific markets drops to zero, according to this.

In most rating services, one home usually equals one-third of a rating point. Often a buying decision is made on as little as .3 of a rating point. So, if Mama snaps off Queen to change diapers, Queen may lose one-third of a point and your station may lose an advertiser. Nielsen Radio Index Pacific is worse. With only 165 families, one West Coast mama represents .6 of a point.
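Dunn's arithmetic here is simple enough to check. A minimal sketch, assuming only what the speech states: a rating point is one percent of the sample, so the "one-third of a point" figure implies a service of roughly 300 homes, against Nielsen Pacific's 165 (the function name is mine):

```python
def points_per_home(sample_size: int) -> float:
    """Rating points represented by a single sample home.

    A rating point is one percent of the sample, so each home
    accounts for 100 / sample_size points.
    """
    return 100.0 / sample_size

# A service where one home equals one-third of a point
# implies a sample of roughly 300 homes.
print(round(points_per_home(300), 2))  # 0.33

# Nielsen Radio Index Pacific, with only 165 families: one West
# Coast home swings roughly .6 of a point by itself.
print(round(points_per_home(165), 2))  # 0.61
```

On these numbers, a single household changing the dial moves a rating by more than the .3 of a point that, by Dunn's account, can decide a buy.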
Now I want to make one of the points I came here to make. There is a discrepancy a mile wide between the probability sample of 1,200 and the active sample of 100, which you get down to when sets-in-use is 8 to 12%. When Mr. Nielsen tells you how many sets are turned on, he is using his probability sample, or as near to it as 10% mechanical failure will allow. At least all 1,080 working Audimeters are working to furnish an answer, yes or no, as to which of them are using their sets and which are not. But when only 100 homes try to tell me which of 2,700 radio stations got the brass ring at the park last Saturday, then this cottonhead is getting off the merry-go-round. Yes, there's a big spread between the full probability sample and the active sample.
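The spread Dunn describes can be put in numbers directly from his figures. A minimal sketch, assuming the 1,200-meter panel, 10% mechanical failure, and 8 to 12% sets-in-use he cites (the function name is mine):

```python
def active_sample(panel_size: int, failure_rate: float,
                  sets_in_use: float) -> float:
    """Homes actually tuned in: the base that divides the audience
    among stations, as opposed to the full probability sample."""
    working_meters = panel_size * (1 - failure_rate)  # meters reporting at all
    return working_meters * sets_in_use               # meters with a set in use

# 1,200 Audimeters less 10% failures leaves the 1,080 working
# meters Dunn mentions; at 8% to 12% sets-in-use, only about
# 86 to 130 homes are listening -- his "active sample of 100."
print(round(active_sample(1200, 0.10, 0.08)))  # 86
print(round(active_sample(1200, 0.10, 0.12)))  # 130
```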
If we must be measured by such samples; if we must be measured by techniques and theories developed before radio was born, then I insist: let us make competitive media take our medicine. Or more properly, suffer their own medicine. Let's measure them the way they measure us.

The Southern California Broadcasters Assn. underwrote just such a measurement last year. This survey, conducted by The Pulse, Inc., used the same general research techniques as those employed by Dr. Starch and by previous surveys sponsored by the Advertising Research Foundation. Interviewers carried copies of the Los Angeles Mirror-News and Los Angeles Herald-Express for Thursday afternoon, April 26, and copies of the Los Angeles Times and Los Angeles Examiner for Friday morning, April 27. Persons who had read any of the newspapers were invited to look at the advertisements while turning each page slowly. Each page of the newspaper was scanned separately, and identification or recognition of the advertisements was noted. It was not necessary for the ads to have been read in whole or in part; recognition of having seen or noted the ad was sufficient for credit to be given.

This interviewing technique is similar to that employed by other research organizations in the newspaper field. However, the survey had one important difference. The difference between The Pulse surveys and other newspaper readership surveys lies largely in the method of presentation of the data. For example, "continuing studies" of newspaper reading previously conducted by the Advertising Research Foundation were based on "readers" of the newspaper. First, a person who had read the newspaper was found, and his recognition of having seen the ads in the paper was recorded. The readership percentages reported are based only on readers of the newspaper. Twenty per cent observation, therefore, means that 20% of the readers of that issue of the newspaper saw the advertisement, with no relationship to market penetration. Figures obtained by this method cannot be projected against the total market but only against the newspaper's circulation. The mighty Los Angeles Times, for instance, reaches only 19% of Los Angeles.

WHY RADIO'S NOT COMPARABLE

Radio research, on the other hand, has always been based on percentages of the total market or on total radio homes, which in the Los Angeles area constitute 99% of the total market. Thus radio ratings have never been comparable with newspaper ratings obtained by previous methods. In order to obtain comparability with radio research, this [Pulse] survey showed all percentages on a base of total homes in the Los Angeles area.

According to ABC (Audit Bureau of Circulations) statements, the four Los Angeles metropolitan dailies had coverage of the Los Angeles city zones as follows: Examiner 15%; Herald-Express 16%; Mirror-News 12%; Times 19%. From these figures, it is apparent that the largest degree of observation an advertisement could receive in the Los Angeles Times is about 20%. This figure would be reached only if the advertisement were read or observed by someone in every home reached by the Los Angeles Times. Thus a 20% rating for an advertisement would correspond to a 100% rating obtained by conventional newspaper readership studies. A 25% observation obtained by usual methods would result in a 5% rating based on percentage penetration of homes.

Obviously, rating percentages shown in this presentation will be much lower than those normally shown for newspapers. This does not mean that fewer readers were found, but merely that the percentages are lower, being of a larger base. For the first time pene...
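The conversion described above is a one-line multiplication. A minimal sketch using the ABC coverage figures quoted in the text (the table and function name are mine; the 25% example reproduces the one given):

```python
# Coverage of Los Angeles homes, per the ABC figures quoted above.
COVERAGE = {
    "Examiner": 0.15,
    "Herald-Express": 0.16,
    "Mirror-News": 0.12,
    "Times": 0.19,
}

def homes_base_rating(observation_among_readers: float, paper: str) -> float:
    """Rebase an ad-observation score (share of a paper's readers who
    recognized the ad) onto total homes in the market."""
    return observation_among_readers * COVERAGE[paper]

# Ceiling: 100% observation among Times readers is still only a
# 19% rating against all Los Angeles homes (the text rounds to 20%).
print(round(homes_base_rating(1.00, "Times") * 100))  # 19

# The text's example: a 25% observation by the usual method works
# out to roughly a 5% rating on the total-homes base.
print(round(homes_base_rating(0.25, "Times") * 100))  # 5
```

Rebasing this way is what makes the newspaper figures directly comparable with radio ratings, which were already expressed against total homes.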