Broadcasting (Jan-June 1933)


Checking Coverage by Mail Queries

Listeners Found More Truthful in Answering Letters; Station Popularity Proves Fickleness of Public

EXPLANATION of the CBS method of mapping the listening areas of its stations, which was started in the previous issue of BROADCASTING, is concluded herewith in a further justification of the questionnaire as a reliable method of obtaining accurate data on station "circulation". Mr. Karol asserts that the mail furnishes a comparatively inexpensive means of obtaining this information and that the results are more dependable than would be obtained by personal interviews.

By JOHN J. KAROL*
CBS Director of Market Research

*Editor's Note: This is the conclusion of an address delivered at the Institute of Education by Radio, Ohio State University, May 4. The first part appeared in the May 15 issue of BROADCASTING.

IN ANY measurement of station ranking it is apparent that one can either measure station popularity as such or the popularity of all the programs which that station broadcasts. The popularity of any station is, of course, the sum of the popularity of all its programs. It would naturally be impossible to measure the popularity of all programs accurately by the mail questionnaire. We therefore decided we could measure station and network popularity by asking directly for it. We limited our questionnaire to the following two simple questions:

1. What station do you listen to most?
2. What other station or stations do you listen to regularly?

The use of only two questions instead of several mathematically reduced the possible error to a minimum figure. The use of two simple questions, the second of which was largely an extension of the first, involving no change of category or concept, reduced the possible error still further.

Psychologically Sound

IN ADDITION, the above questions are most in keeping with listener psychology. For numerous reasons, the name of a station tends to impress itself especially strongly on the mind of the listener:

1. A constant repetition of the station name throughout all programs heard in the period during which the listener is using his radio.
2. The habit of looking up individual programs in newspapers or other publications in terms of the station over which they will be broadcast.
3. The necessity for identifying the name of the program with the call letters of the station from which it emanates.

All these tend to emphasize the identification of station call letters. The questions asked call for an immediate intuitive response. This response is not the product of a special judgment, but of habitual reactions based upon certain psychological phenomena:

1. Auditory memory, in the sense of constantly hearing the call letters of the station.
2. Visual memory, in the sense of calling to mind the number on the dial corresponding to the station.
3. Kinesthetic or manual memory, in the sense of the daily repetition of the physical act of tuning in.

The development of a sound procedure for the collection of the data in question necessitated, at the outset, a decision as to what would constitute an adequate sample. We experimented considerably before we decided on the actual number of questionnaires to be sent to each city. We finally sent out about twice as many cards as were necessary for statistical significance. For example, 25,000 cards were mailed in the five boroughs of New York City and pro-rated according to the number of sets in each borough. When the first 500 cards were returned, the final answer was established. The actual percentage of votes received by each station did not vary more than 1 per cent with each successive tabulation of 500 returns. A total of over 4,000 cards was received from the New York mailing. Other cities were similarly checked.
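The stabilization test just described, tabulating the cumulative percentages after each successive batch of 500 returns and watching how little they move, can be sketched in modern terms. The station call letters and vote shares below are invented for illustration, not the 1933 figures; only the checking procedure follows the article.

```python
import random

# Hypothetical returns: each card names the station the respondent
# "listens to most". Names and weights are illustrative only.
stations = ["WABC", "WEAF", "WJZ", "WOR"]
weights = [0.40, 0.30, 0.20, 0.10]
random.seed(1933)
returns = random.choices(stations, weights=weights, k=4000)

def shares(cards):
    """Percentage of votes received by each station in a stack of cards."""
    return {s: 100 * cards.count(s) / len(cards) for s in stations}

# Tabulate cumulatively after each successive batch of 500 returns
# and record the largest shift between one tabulation and the next.
prev, max_shift = None, 0.0
for n in range(500, len(returns) + 1, 500):
    current = shares(returns[:n])
    if prev is not None:
        max_shift = max(max_shift,
                        max(abs(current[s] - prev[s]) for s in stations))
    prev = current

# The article's criterion: no station's percentage moved more than
# about 1 point between successive 500-return tabulations.
print(round(max_shift, 2))
```

The point of the check is that once the percentages stop moving between batches, further returns add little, which is why the first 500 cards could "establish the final answer".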
Price, Waterhouse and Company tabulated each day's returns separately, and the percentage of votes cast for Columbia stations in the first five days' returns from 80 cities showed a variation of only 1.2 per cent from the final tabulation of votes.

The mailing lists were obtained from the latest telephone directories in each city. This was justified by the close degree of correlation between radio ownership and telephone ownership. Both radio and telephone owning homes tend to exclude the lowest income groups.

Variation in Cities

THE TOTAL percentage return has increased in each successive Price-Waterhouse audit. In the fourth study (which has just been completed) the percentage return was 18.1. Perhaps this is a commentary on the growing interest in radio, or perhaps it merely reflects the greater degree of radio ownership among telephone homes. Comparing the returns from individual cities in the four audits, it is interesting to note that certain cities consistently return a low percentage, while others show a high percentage return in each audit. Among the low return cities are Chicago, New Orleans, Memphis, San Antonio, Mobile and Birmingham. Among those cities which show a high percentage return are Denver, Akron, Syracuse, Worcester, Youngstown, Atlantic City and the Pacific coast cities. The higher returns from the Pacific coast and certain industrial cities of the mid-west may reflect the activity and booster spirit. The low percentage of returns from certain southern cities may be explained by the fact that radio ownership is relatively lower in the south.

Million Questionnaires

TO DATE we have sent out over a million questionnaires, the returns from which have all been tabulated by Price, Waterhouse and Company. These mailings provide a great deal of material for interesting analyses. We have learned that the post card questionnaire is an extremely sensitive barometer.
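The pro-rating of the 25,000-card New York mailing by the number of sets in each borough amounts to a proportional allocation. The borough set counts below are invented for the sketch; only the apportionment method is implied by the article.

```python
# Hypothetical radio-set counts per borough (illustrative, not 1933 data).
sets_per_borough = {
    "Manhattan": 520_000,
    "Brooklyn": 610_000,
    "Bronx": 310_000,
    "Queens": 270_000,
    "Richmond": 40_000,
}
total_cards = 25_000  # the New York mailing described in the article
total_sets = sum(sets_per_borough.values())

# Largest-remainder apportionment: give each borough the whole part of
# its proportional quota, then hand leftover cards to the boroughs with
# the largest fractional remainders so the total comes out exact.
quotas = {b: total_cards * n / total_sets for b, n in sets_per_borough.items()}
alloc = {b: int(q) for b, q in quotas.items()}
leftover = total_cards - sum(alloc.values())
for b in sorted(quotas, key=lambda b: quotas[b] - int(quotas[b]),
                reverse=True)[:leftover]:
    alloc[b] += 1

assert sum(alloc.values()) == total_cards
```

Any proportional rounding scheme would serve here; the largest-remainder step merely guarantees that the allocations sum to the fixed number of cards printed.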
And we found that in this study none of the usual objections to a mail questionnaire were valid. The one principal objection to a mail questionnaire is its possible selection of lower income groups for replies. There are two answers to this. In measuring the relative popularity of two or more network stations, under the present structure of American broadcasting, there are no specific differences of appeal — no network which broadcasts programs which appeal to upper income levels nor one which appeals to lower income levels. But we went beyond that. We made a personal-interview investigation, pro-rating the calls according to the number of homes in each income class. This "controlled" proportioning of the interviews produced results which checked the mail question findings within a fraction of a per cent.

Results Found Accurate

BUT I started to say that we found the mail questionnaire particularly sensitive as an accurate reflection of true station popularity. For example, a survey was conducted in Hartford last summer by an independent research organization. Telephone calls were made by a crew of girls continuously from 9 a.m. to 10 p.m. Radio owners were asked to report what they were listening to at the moment of the call and what station was tuned in. The week's summary of this survey revealed a station ranking considerably at variance with our third Price-Waterhouse audit conducted 8 months previously. Since this result affected one of our affiliated stations, we were anxious to check up on it. It was difficult to believe that our station had doubled its popularity vote in the course of eight months. We sent out 1,000 questionnaires using the same form as that used in our Price-Waterhouse audits, and approximately 160 usable returns were received. A tabulation of these returns checked within 2 per cent of the actual percentages shown in the telephone survey. Not convinced, we had another thousand questionnaires mailed to the same city.
The returns again checked almost exactly with the previous test mailing. The explanation for this sudden shift in audience preference was found in the termination of synchronization of two stations. The station which dropped in popularity was deprived of full time operation.

Popularity Fickle

BECAUSE of shifts in the popularity of individual stations, it is necessary to conduct investigations at frequent intervals. For a network it is also necessary to obtain a simultaneous picture of popularity in all sections of the United States. The personal interview method would require an enormous staff and would be considerably more costly than the questionnaire method. Because we were seeking simple and direct information, we feel that the questionnaire method is ideally adapted to the determination of network popularity.

Another application of the mail questionnaire in radio research was that employed by Prof. Elder in his studies for CBS. These investigations of radio as a sales-producing medium were conducted entirely by mail after the method was checked by personal interviews in one city. The technique was extremely simple and logical. A letter enclosing a simple postcard questionnaire was mailed from the Massachusetts Institute of Technology. Again telephone directory lists were used. The letter and questionnaire were carefully worded to give no hint as to the real objective of the survey and merely requested information concerning brands of various products used in the home.

Re Personal Interviews

TEN different categories of products, as well as a question concerning radio ownership and time of use, were included on the questionnaire. A usable return of about 15 per cent was obtained. Incidentally, these studies revealed the fact that the average radio set is in use about 4 hours a day.
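Because the Elder questionnaire recorded both radio ownership and brand usage, its returns support the kind of comparison between radio and non-radio homes that the study was built for. A minimal sketch of that tabulation follows; the counts and the single brand are invented for illustration.

```python
# Hypothetical postcard returns from an Elder-style brand study: each
# record notes radio ownership and use of one (illustrative) advertised
# brand. All figures below are invented for the sketch.
returns = (
    [{"radio": True,  "uses_brand": True}]  * 240 +
    [{"radio": True,  "uses_brand": False}] * 260 +
    [{"radio": False, "uses_brand": True}]  * 90 +
    [{"radio": False, "uses_brand": False}] * 160
)

def usage_rate(records):
    """Share of homes in a group reporting use of the brand."""
    return sum(r["uses_brand"] for r in records) / len(records)

radio_homes    = [r for r in returns if r["radio"]]
no_radio_homes = [r for r in returns if not r["radio"]]

# The non-radio homes serve as the control group: the difference in
# brand usage is the measure attributed to radio's influence.
lift = usage_rate(radio_homes) - usage_rate(no_radio_homes)
print(f"radio homes: {usage_rate(radio_homes):.0%}, "
      f"control: {usage_rate(no_radio_homes):.0%}, lift: {lift:+.0%}")
```

With the invented figures this prints a 48 per cent usage rate among radio homes against 36 per cent in the control group; the same subtraction would be repeated for each of the ten product categories.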
Since returns were obtained from homes without radio sets as well as from radio homes, we were able to set up a control group which enabled us to measure radio's influence on consumers of nationally advertised brands of these 10 categories. The use of the mail questionnaire, which required no signature, in this study tended to avoid any possible coloring of results. Psychological factors often influence

(Continued on page 17)

Page 14 BROADCASTING • June 1, 1933