"Songbirds Of The South," selling Ballard & Ballard products on America's No. 1 Negro audience station. Another in WDIA's daylong parade of stars steadily increasing sales for advertisers like these*, attracted by consistently high Hoopers† plus WDIA's renowned selling power.

*Borden's Starlac, Treet Blades, Ipana, Water Maid Rice

†HOOPER RADIO AUDIENCE INDEX
City: Memphis, Tenn.   Aug.-Sept. 1950   14,353 calls
TRTP: Sets 17.1; WDIA 22.6; A 21.9; B 17.7; C 14.1; D 14.0; E 11.3; F 5.6

WDIA, Memphis, Tennessee. Bert Ferguson, Mgr.; Harold Walker, Com'l Mgr.; John E. Pearson Co., Rep.
agreement as to what constitutes the most essential kinds of data; (2) lack of any over-all analysis of the differences (their nature and magnitude) between results of the present rating services, why the differences occur, and what data is most useful under what circumstances; (3) ignorance of both sponsor and agency people (seldom research people at either agency or sponsor) of what ratings and related data mean.
Item number one of what sponsors can do now to dispel some of the billowing radio/TV research fog: if you have a research department, ask them to set down in a few simple statements (using only five-cent words) exactly what you need in radio or TV research — and why. The why is important. You should, and can, know precisely how every last research item you're buying is going to help you do a job.
Suppose you need an estimate of what it's costing you to reach a thousand radio homes. You're told that a certain rating figure will enable you to make the estimate. As indicated in the case of Hooper and Pulse, there are several kinds of ratings possible. You could use any of them to figure your cost per thousand homes. But there can be startling differences in the results.
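To make that arithmetic concrete, here is a minimal sketch of the cost-per-thousand calculation. The program cost, the number of radio homes in the market, and both rating figures are made-up numbers for illustration only, not figures from Hooper, Pulse, or any station.

```python
# All figures below are hypothetical, for illustration only.
PROGRAM_COST = 500.00          # assumed cost of one broadcast, in dollars
RADIO_HOMES_IN_AREA = 200_000  # assumed radio homes the station can reach

def cost_per_thousand_homes(rating_percent):
    """Dollars spent per thousand homes reached, implied by a given rating."""
    homes_reached = RADIO_HOMES_IN_AREA * rating_percent / 100
    return PROGRAM_COST / (homes_reached / 1_000)

# The same broadcast looks cheap or costly depending on which rating you plug in:
print(cost_per_thousand_homes(30))   # $8.33 per thousand homes
print(cost_per_thousand_homes(10))   # $25.00 per thousand homes
```

Nothing in the sketch is sophisticated; the point is only that the kind of rating you choose fixes the answer you get.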
Here's an extreme case, just to illustrate.
If 30 people out of a hundred listened to all of a 30-minute program, both Hooper and Pulse would give it a 30 rating (30%). Since everybody in this example listened each minute, the 30 represents the average number of people, or homes, listening per minute.
But since our 30 listeners also represent the total number who heard any part of the program (in this case they heard it all), the 30 also represents the total audience.
Suppose now that each person listened for just one minute each to the program. Since Hooper's technique measures the average audience per minute, he would now give the program a rating of 1. But Pulse, whose technique measures all listening to any part of a program, would still give the program a rating of 30.
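A brief sketch of the two rating definitions may help. The minute-by-minute listening record below is invented to match the extreme case just described, and the function names are illustrative, not the services' own terms; it shows only the arithmetic, not how either service actually gathers its figures.

```python
SAMPLE_SIZE = 100       # homes in the hypothetical sample
PROGRAM_MINUTES = 30    # length of the program

# Extreme case from the text: 30 homes tune in, each for just one minute.
# listening[home] is the set of minutes that home spent with the program.
listening = {home: {home} for home in range(30)}

def average_audience_rating(listening):
    """Average homes tuned in per minute, as a percent of the sample (the Hooper-type figure)."""
    per_minute = [sum(1 for minutes in listening.values() if m in minutes)
                  for m in range(PROGRAM_MINUTES)]
    return 100 * (sum(per_minute) / PROGRAM_MINUTES) / SAMPLE_SIZE

def total_audience_rating(listening):
    """Homes that heard any part of the program, as a percent of the sample (the Pulse-type figure)."""
    return 100 * sum(1 for minutes in listening.values() if minutes) / SAMPLE_SIZE

print(average_audience_rating(listening))  # 1.0
print(total_audience_rating(listening))    # 30.0
```

Had each of the 30 homes stayed tuned for all 30 minutes, both functions would return 30.0, matching the first case above.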
This is an extreme example which wouldn't happen just like this in actuality — but it emphasizes the kind of differences in audience that different ratings represent. It's obviously important to be aware that all ratings
aren't alike. You don't have to be an expert to keep the wool from being pulled over your eyes. Technical questions about the validity of the different techniques you can leave to your research advisers. But you can know exactly why you're getting one kind of rating instead of another.
This should of course be checked with your agency. But you should insist that the agency research head be in on it and not subject to a veto or by-pass by an account executive. Account executives sometimes have very understandable reasons of their own for overruling research executives.
The typical research executive wants facts and he wants to interpret them as objectively as he can. Account executives are concerned with the results for their client of their decisions and recommendations. If research data, including ratings, seems at times to get in the way, that may be too bad for the research data. Most agencies naturally want elbow room when it comes to justifying decisions. Many of them would just as soon not pin themselves down too closely on the whys and wherefores of radio and TV research. But you'd better know explicitly what you need and why — and you can.
If you haven't a radio-TV research specialist in your own organization (the majority of sponsors don't) and you don't fully understand or agree with what an agency executive tells you on this subject, check with an independent research consultant.
Item two: find out something about the heads of the research organization whose services you're using. What about their integrity? Will they tell you the truth about their sample? Are their research brains competent? If they have weaknesses, you ought to know what they are.
Item three: beware of careless comparison of ratings — the danger of this is evident from the foregoing examples.
Item four: how was the information gathered? By telephone, meter, diary, personal interview? Each has its biases; but each also has some advantages. For example, meters (which require a fixed sample) can give you cumulative audience (the net, or unduplicated, audience) figures for a week, a month, etc. Diaries can give cumulative audience and other figures that meters yield, but usually for a period of a week only. Many diary samples, which keep a written record or "diary" of listening for a single week,