the same person's reply contains any inaccuracies, too. Records must also be kept on the disposition of all such erroneous replies (except where the specific rating procedure is well known), and the records must be accessible for study by MRC or its auditors.
DISCLOSURE PROCEDURES
The second part of the adopted minimum standards applies to disclosure: how each rating service should report its surveys.
1. Each rating report should include a "concise description" of the methodologies used. This should include a definition of the sample, the technique used to cover it, the area involved, the time slot, and a statement of whether or not "weighting" has been applied.
2. Each report must also mention all known omissions, errors and biases that might affect results.
3. Further, each report must also cite any deviations from standard procedures that might color the results (for example, that the 20 interviewers involved were working on their first survey).
4. The rate of cooperation must also be noted. For example, each report should state the number of households originally selected, plus the number actually providing usable information that was incorporated into the report. (But if some usable information was not used, that, too, should be reported.)
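The arithmetic behind rule 4 is simple division; as a sketch, with purely hypothetical household counts (the standard itself specifies no figures):

```python
# Rate of cooperation: usable returns divided by households originally selected.
# All figures below are hypothetical illustrations, not from the BRC standard.

def cooperation_rate(households_selected, usable_returns):
    """Fraction of originally selected households yielding usable information."""
    return usable_returns / households_selected

selected = 1200  # households originally selected (hypothetical)
usable = 780     # households providing usable information (hypothetical)

rate = cooperation_rate(selected, usable)
print(f"{usable} of {selected} households usable: {rate:.1%}")  # prints 65.0%
```

A report meeting the standard would show both raw counts, not just the resulting percentage, so users can judge the non-response risk for themselves.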
5. In a prominent place, each report must compare its sample data with comparable primary-source data (such as households or individuals) to show the degree to which its sample really does represent the "universe" it is said to be measuring. (These are to be broken down by counties or reasonable county groupings.) Services that use the same sample over and over again in their regularly-issued reports must print the same data in each report, but update it only semi-annually.
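As a sketch of what the sample-versus-universe comparison in rule 5 might look like, with purely hypothetical county names and counts (the standard names none):

```python
# Hypothetical comparison of sample composition with primary-source (census)
# data, by county, to show how closely the sample mirrors the universe.

census_households = {"County A": 50000, "County B": 30000, "County C": 20000}
sample_households = {"County A": 260, "County B": 140, "County C": 100}

census_total = sum(census_households.values())
sample_total = sum(sample_households.values())

for county in census_households:
    u = census_households[county] / census_total  # universe share
    s = sample_households[county] / sample_total  # sample share
    print(f"{county}: universe {u:.1%}, sample {s:.1%}, difference {s - u:+.1%}")
```

Here County A would show as slightly over-represented (52% of the sample against 50% of the universe), exactly the kind of discrepancy the rule wants displayed prominently.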
6. Geographic areas surveyed should be clearly defined, with the selection criteria given. Thus, if the area surveyed is Metropolitan New York as defined by the U.S. Census, it should be so recorded in the report.
7. Surveys executed for a specific client shall clearly show that the report is special, not part of a regular syndicated service. In fact, the client must be named and the report's format made clearly distinguishable from that of the regular report.
BUILT-IN ERROR
The BRC statement took particular care to acknowledge that audience measurement is subject "to many kinds of error."
Some, of course, are non-sampling errors. These may result from the methodology used, the manner in which the survey is conducted, or (even more unpredictably) sample non-cooperation or non-response.
"However," the MRC announcement explained, "even a true probability sample is likely to include errors due to the operation of chance in the selection of the sample." The size of this chance variation depends, among other things, upon the size of the sample. (All sampling research is subject to such "sampling error.")

The sample variation that is due only to the size of the sample may be expressed as "statistical tolerance" or "standard error."
8. Thus, each rating report should list, preferably on its front page, several keys: (1) the standard error; (2) the formula used to select the specific sample in the first place; (3) a chart or table that lists the statistical tolerances for one and/or two standard errors; in other words, a chart that shows just what these variations are (and what they mean) when applied to typical items included in the report. It must also be pointed out that, just because estimates of sampling error have been shown, that doesn't necessarily mean that a probability sample design has been achieved.
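For the simplest case (a simple random sample) the standard error of a rating can be sketched with the usual binomial formula; the rating and sample size below are hypothetical, and a real rating sample (clustered, weighted) would carry wider tolerances than this idealized calculation:

```python
import math

def standard_error(rating, sample_size):
    """Standard error of a proportion under simple random sampling:
    sqrt(p * (1 - p) / n)."""
    return math.sqrt(rating * (1 - rating) / sample_size)

p = 0.20  # a reported rating of 20.0 (hypothetical)
n = 1000  # usable sample size (hypothetical)

se = standard_error(p, n)
print(f"one standard error:  +/- {se:.3f}")      # about +/- 1.3 rating points
print(f"two standard errors: +/- {2 * se:.3f}")  # about +/- 2.5 rating points
```

This is why the standard pairs the tolerance chart with the warning in rule 8: the formula assumes a true probability sample, and publishing the numbers does not by itself prove one was achieved.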
CONVERTING TO A REPORT
9. When a rating service converts basic raw data into a rating report, it must show all the "weighting" or data adjustments that have been applied, along with the reasons for so doing. This information must be available to all users of said report.
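A minimal sketch of the kind of weighting adjustment rule 9 asks services to disclose, using hypothetical census shares and raw returns (neither appears in the standard): each group's weight brings its sample share up or down to its known universe share.

```python
# Hypothetical post-stratification weighting: adjust raw returns so each
# group matches its census share, and note the reason, as rule 9 requires.

universe_share = {"metro": 0.60, "non_metro": 0.40}  # from census (hypothetical)
sample_counts = {"metro": 720, "non_metro": 280}     # raw returns (hypothetical)

total = sum(sample_counts.values())
weights = {g: universe_share[g] / (sample_counts[g] / total)
           for g in sample_counts}

for group, w in weights.items():
    print(f"{group}: weight {w:.3f} (sample share adjusted to census share)")
```

Under the standard, both the weights and the reason for them ("metro homes over-represented in raw returns," say) would have to appear in the published report.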
10. Each rating service must indicate the minimum sample returns that are acceptable under its standards. Such a minimum may differ from service to service, depending on (1) the methodology, (2) the number of stations being reported, and (3) the number of homes with radio or tv.
11. Where reports are issued on a regular basis, each rating service must indicate the normal sample return for each survey. And when the return is below normal (but not below the required minimum), this, too, must be pointed out, preferably in a prominent place.
FURTHER CLARIFICATIONS
12. Cross-tabulations of demographic and consumer information must be supported by the minimum sample base required. When the sample for one period is inadequate for reporting such information accurately, it may be necessary to combine the samples of two, three or more successive periods.
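The period-combining step in rule 12 can be sketched as a simple accumulation, with a hypothetical minimum base and hypothetical per-period returns (the standard sets no particular numbers):

```python
# Pool successive survey periods until a demographic cell reaches the
# minimum sample base required for cross-tabulation. All figures hypothetical.

MIN_BASE = 100  # minimum in-tab sample for reporting a cell (hypothetical)

period_returns = [42, 38, 45]  # usable returns for one cell, three periods

pooled, periods_used = 0, 0
for n in period_returns:
    pooled += n
    periods_used += 1
    if pooled >= MIN_BASE:
        break

print(f"combined {periods_used} periods for a base of {pooled}")  # 3 periods, 125
```

The trade-off the rule implies is worth noting: pooling buys statistical stability at the cost of timeliness, since the combined figure describes several periods rather than one.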
13. If any station has resorted to "special, non-regular promotional techniques" that might "hypo" (distort) its ratings, the rating service must also point that out.
14. The rating service must also publish any other distorting influences that it is aware of. These might include unusual weather, catastrophes, political or social events, or preemptions such as world series, elections, Congressional hearings, even transmission failures.
In addition to the above "minimum standards," which are applicable to all rating services, "specific standards" will also be established by BRC. Since these will have to be tailored individually to each specific technique in use, they'll evolve only over a period of time.
Sources that are expected to be most helpful in developing BRC standards include questionnaires that some rating services have already filled out and submitted to BRC, as well as the methodology studies of participating industry groups.
April 20, 1964