How to Improve the University's
    National Ranking and Its Academic Reputation



                  Robert Ehrlich
                  November, 1999


Acknowledgements: I am grateful for the support of the Provost's
Office, under whose auspices this report was produced.  I am also
grateful to the following persons, who reviewed a draft of this
report; many of them provided useful feedback, which has been
incorporated in this final version: Helen Ackerman, Kathleen
Cheney, Peter Denning, Maria Dworzecka, Karen Gentemann, Renate
Guilford, Marcelle Heerschap, Craig Herburg, Donna Kidd, Linda
Schwartzstein, Hale Tongren, and W. Michael Wood.  I am also
grateful to Kelly Richardson, who assisted with the data analysis,
and to Stan Zoltek, who helped put this report on the web.


Introduction.

"The annual U.S. News & World Report rankings emerged in 1985, and
are now awaited with the kind of anticipation usually reserved for
the oscars."[1]  U.S. News and World Report (USNWR) ranks colleges
and universities in four categories: 227 national universities, 504
regional universities, 162 national liberal arts colleges, and 429
regional liberal arts colleges.  The distinction between the
national and regional university categories is based on Carnegie
classification.  George Mason University moved from the regional
university category to the more elite national category in 1996,
though it remains to be seen if we will remain in that pool as of
next year.[2]  

The national universities are divided into four tiers of roughly
equal size.  When USNWR publishes its rankings, it ranks only the
schools within the first tier of 50 schools; in the three lower
tiers, it simply lists schools alphabetically.  Until 1999 George
Mason had been in the fourth (lowest) tier, but we moved into the
third tier this year.  Whether it is desirable to devote
significant resources merely to move from tier 3 to tier 2 in the
national rankings is highly debatable.  George Washington
University, for example, is reported to have recently made a
costly and unsuccessful effort to break into tier 1.  However,
while the impact of our moving into tier 2 is more questionable
than that of a move into the top tier, the costs and effort needed
may also be far less.  In addition, much of the effort needed to
improve our USNWR ranking would also help our academic reputation
generally.

The criteria used by USNWR for ranking national universities, and
the rankings themselves, have been downloaded from the USNWR
website and are included as appendices A and B.  This study was
undertaken in order to answer the following questions:


1. Why did George Mason move from tier 4 to tier 3 in 1999?

2. Which factors used by USNWR are most promising in terms of
being able to affect our ranking?

3. What are some specific scenarios under which George Mason could
move from tier three to tier two, and what would be required in
each case?

4. What are some specific things the University could do to improve
its academic reputation generally?

Since some of the discussion of questions 1, 2, and 3 involves
various technicalities, we first address the more general issue
raised in question 4.



SECTION I:

What are some specific things the University could do to improve
its academic reputation generally?

In subsequent sections we consider specific ways to move the
University into tier 2 of the USNWR category of national
universities.  Some of these suggestions would become moot should
the University be moved out of the category of national
universities.  This change seems probable in light of the impending
change in the definition of the Carnegie classifications, which
will require Research/Doctoral I schools to award doctorates in at
least 15 programs.[2]  We currently award doctorates in only 10
programs, and it would be foolish to imagine adding five more
Ph.D. programs just to make it into the more elite Carnegie class. 
However, it may be worth recalling that a number of our Ph.D.
programs (particularly those in CSI and IT&E) are in fact
"umbrella" degree programs, so a disaggregation of these programs
could easily put the University over the 15-program threshold with
zero new resources, were this change considered desirable.  It is
important that we consider the pros and cons of such a move,
especially how it affects the perception of the University relative
to our biggest competitors, who (apart from JMU) will likely be in
the new Research/Doctoral I category.

Recommendation 1: Carnegie Classification. Consider carefully the
pros and cons of disaggregating several Ph.D. programs, so as to
put the University in the new highest Carnegie Classification.

However, whichever category the University is placed in, it is
still valuable to understand which factors are most important in
improving our ranking.  In view of the great weight USNWR gives to
the academic reputation variable, the issue of improving the
University's academic reputation generally is inextricably linked
to improving our USNWR ranking.  Moreover, improving our academic
reputation cannot help but affect our student selectivity and many
of the other factors that go into the USNWR formula. 
Unfortunately, some of the ways to improve our academic
reputation, while worthy of attention, could prove quite costly. 
Here I will dwell on those that are not.

A university's academic reputation is built to a significant degree
on that of its faculty and its programs.  But public perception in
these areas is at least as important as the reality.  In the past,
the University has been fairly successful in garnering media
attention for some of its faculty, although often this has been the
result of individual faculty contacts or achievements.  For
example, appearances by a number of faculty on various radio and TV
news shows and in the popular press cannot help but promote our
image.

But there are at least two obstacles to getting good publicity out
regarding academics at the University, one cultural and the other
systemic.  A university's academic reputation means something very
different to faculty, administrators, and prospective students. 
Favorable attention in the mass media may carry much more weight
with prospective students and administrators than with faculty.  In
academia, many faculty are loath to call attention to their own
achievements, and regard those of their colleagues who seek or
obtain recognition in the media with some distaste.  For example, a
scientist whose name is widely known to the public would usually be
regarded by his colleagues as a "popularizer" and not a real
scientist, even though the two roles are not actually antithetical. 
The reward structure at most universities also bestows far more
recognition (and financial reward) on those who advance the
scholarship of their discipline (in journals read by fellow
experts) than on those who garner public attention.  The systemic
obstacle to getting greater media attention is that there is often
no direct way for the University Relations Office to become aware
of potentially exciting developments that might make for good news
stories.

Recommendation 2: PR Work by Faculty. Reward those who achieve
recognition among the general public for their scholarly work, even
if that work takes the form of communicating developments in their
field to a broad audience rather than original scholarship.  This
could be accomplished in a concrete way by encouraging deans and
department chairs to recognize and reward faculty who have brought
favorable attention to the University.

Recommendation 3: Media and Community Outreach.  Put someone in
charge of contacting individual academic departments and faculty to
help them bring the interesting work they may be engaged in to the
wider notice of the media and the public.  These contacts should be
made on a regular basis.  Also, encourage each academic department
to designate a faculty liaison in charge of public relations.  This
individual should take an aggressive role in getting the word out
about exciting developments in their area, possibly enlisting the
aid of allies in the business community.

The effort to gain favorable recognition for the University at the
local and statewide level needs to go forward in tandem with
achieving greater national recognition.  Currently, most faculty at
the University view attracting better students as primarily the job
of the Admissions Office.  They need to become much more involved,
in ways that take advantage of their individual expertise.

Recommendation 4: Faculty Involvement in Recruiting.  Strongly
encourage academic departments to establish linkages with high
school and NVCC faculty in their disciplines, and also encourage
individual faculty to visit high school classes in Northern
Virginia and throughout the state to speak about exciting work done
in their disciplines.  This sort of service work needs to result in
concrete and specific rewards for the faculty who engage in it,
commensurate with the rewards that occur when a faculty member
generates a scholarly publication.

Increasing the pool of very good students who apply to the
University is only part of the answer to improving our academic
reputation.  We also need to improve our chances of getting the
best students to accept our offers.  Currently, far too many very
good students are using us as a "safety" school, in case they are
not accepted by their first- or second-choice school.  A major
factor in the minds of many superior students is the kind of
scholarship aid a university offers.  Regrettably, our University
lags behind our competitors on this score.

Recommendation 5: Aid for Merit Scholarships. Increase the amount
of merit aid for academically superior students, to put us on a par
with our closest competitors.

The University should keep an eye out for high-profile projects
that would bring us good long-term publicity.  (The World Congress
may have been high profile, but its long-lasting impact is
unclear.)  There is one highly innovative project that would: (a)
bring us a great deal of good publicity, (b) team us up with area
companies, (c) bring a steady stream of visitors to the campus, (d)
cost very little, (e) be a one-of-a-kind project found at very few
universities, (f) serve as a valuable resource for area teachers,
and (g) build on an area of existing strength -- namely, offering
University land for a state-of-the-art interactive science and
technology center.  According to Time magazine this is a boom time
for such centers, which have tripled in number during the past
decade.  An astounding 115 million people visit these centers each
year.  (See the Time story in appendix D for a fascinating look at
how these facilities combine science, learning, and fun.)  A group
of entrepreneurs is currently raising funds from area companies to
build such an interactive science/technology center at an
as-yet-undetermined location in Northern Virginia.

Recommendation 6: Science Technology Center.  Meet with the group
planning the new science technology center, and consider offering
a location on campus, or on other University-owned land.

I have saved two particularly controversial recommendations for
last.  Currently, the avenues by which programs and departments
call attention to their academic offerings and the scholarly work
of their faculty are filtered through the collegiate academic
units.  In an academic culture in which all deans are in some sense
equal, this system results in approximately equal space in
University publications for each college or school, equal time on
"Open House" programs for prospective students, and so on.  While
this system may serve the interests of many of the smaller schools
and colleges, it does not serve the interests of the University as
a whole very well.  It presents a distorted picture of the
University to outsiders, and it greatly understates the importance
of the individual programs and departments in the largest collegial
units.  It also impedes getting the word out regarding achievements
and exciting programmatic developments.

Recommendation 7: Programs Before Colleges.  Allocate space in
brochures, and time on programs for potential students, in a manner
that gives all individual programs an equal chance of exposure,
even if they happen to be in a collegial unit comprising a great
many programs.

Nowadays, prospective students obtain much of their information
about universities through the web.  The University website is
probably much more important than any individual publication in
generating interest on the part of potential students. According to
a recent survey reported in the Chronicle of Higher Education, what
website visitors care most about is "a comprehensive look at the
academic offerings of the institution, including reviews of courses
and faculty members."[5]  Unfortunately, the University website
suffers from the same fixation we have in our publications:
filtering everything through the collegial units before getting to
the level of programs and departments.  (For example, just try
finding out about physics starting from the main GMU website -- you
cannot do it without first going through the College of Arts and
Sciences, unless you change the setting to sort alphabetically.  A
naive search for sites containing "physics" returns around 500
hits!)  One final example of the difficulty of obtaining academic
information about the University on a programmatic basis: starting
from the main University website, I was unable to find out in what
areas we offer doctoral work.

Recommendation 8: Better Web Information and Connections.  Change
the University website to make it simple for users to link to
websites for specific academic programs and departments.  Also,
have the website thoroughly reviewed by an outside expert who is
familiar with the best practices used around the country.



SECTION II: Analysis of USNWR Rankings

Why did George Mason Move from Tier 4 to Tier 3 in 1999?

Although USNWR doesn't publish rankings within tiers beyond tier 1,
it does furnish this information on request to the particular
schools being ranked.  Here is the information supplied by USNWR
for this year (2000) and last (1999) for the University.  The rank
given in each category is our position among the 227 national
universities -- recall that the lower the number, the better.

                                             1999                  2000
                                             rank                  rank
Financial resources                          183                   200
Academic reputation                           96                   110
Graduation & retention                       150                   151
Student selectivity                          195                   192
Faculty resources                            202                   211
Alumni giving                                159                   184
Graduation rate                              161                   185

OVERALL RANK                                 177                   169

Thus, we rank best in the academic reputation category (just above
the mean of the 227 national universities), and poorest in the
faculty resources category (which is based primarily on class size
and faculty salaries), where we currently rank in the 93rd
percentile.  A careful look at the figures for the two years
reveals a most curious fact: our ranking improved (very slightly)
in only one category from 1999 to 2000 (student selectivity), and
yet our overall rank somehow improved despite our being ranked
lower in the other six categories!  In fact, since in 1999 the 4th
tier began at school 173, the small change from 177 to 169 was
enough to move us up to tier 3.  Our University's good fortune
occurred solely as a result of a change in USNWR's methodology. 
Until this year they had computed each school's overall rank from a
weighted average of its individual category ranks.  The new formula
computes the overall rank from the point scores in each category
rather than from the ranks.  This can make a big difference if a
category contains a small number of schools with many more points
than all the rest.

This methodology change made the most difference in the category of
financial resources, defined as dollars spent per student (which
accounts for 10 percent of the total score).  Cal Tech has such a
high score in this category compared to all other schools that it
has 5.7 times the points of the number 2 school (MIT).  Under the
old system of ranking within a category, Cal Tech gets the maximum
10 percent of the score, and scores decline with rank from MIT,
which gets almost 10 percent, down to the lowest-ranked school,
which gets 0.  Under the new system of simply assigning points, Cal
Tech gets the maximum 10 percent, MIT gets 10/5.7 = 1.75 percent,
and all the rest get less than 1.75 percent.  Essentially, what the
new system does is greatly devalue the weight given to financial
resources for all schools other than Cal Tech.  Our University
benefited greatly from this devaluation, in view of its low ranking
in this category (200 out of 227 this year).  Were it not for that
benefit, one might be tempted to point out to USNWR how ridiculous
the new system is!
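
To make the effect of this change concrete, the following short
Python sketch contrasts the two scoring methods for a single
category.  The spending figures are made up for illustration; only
the 5.7-to-1 ratio between Cal Tech and MIT is taken from the
discussion above:

    # Illustrative sketch, not USNWR's actual code or data: two ways
    # of turning one category's raw values into a score worth up to
    # 10 points.
    spending = {            # hypothetical $K spent per student
        "Cal Tech": 553.0,  # far above everyone else
        "MIT":       97.0,  # roughly 1/5.7 of Cal Tech
        "School C":  60.0,
        "School D":  30.0,
        "GMU":        8.0,
    }

    # Old method: score by rank within the category; the gap between
    # consecutive schools is the same regardless of their raw values.
    ranked = sorted(spending, key=spending.get, reverse=True)
    n = len(ranked)
    old = {s: 10.0 * (n - 1 - i) / (n - 1) for i, s in enumerate(ranked)}

    # New method: score in proportion to the raw value itself; one
    # extreme outlier (Cal Tech) compresses everyone else toward zero.
    top = max(spending.values())
    new = {s: 10.0 * v / top for s, v in spending.items()}

    for s in ranked:
        print(f"{s:10s}  old: {old[s]:5.2f}   new: {new[s]:5.2f}")

With these numbers MIT drops from 7.5 points under the old method to
about 1.75 under the new one, which is exactly the devaluation
described above.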


Which factors used by USNWR are most promising in terms of being
able to affect our ranking?

In order to answer the question of what a move to tier 2 would
take, we need to be able to reproduce the ranking system used by
USNWR.  The 16 factors that go into the ranking formula are grouped
into clusters and listed in appendix A with their assigned relative
weights.  While the definitions of most of the factors are obvious,
a few may not be: (1) the alumni giving rate is the fraction of
alumni who donate to the school, and (2) the academic reputation is
a number based on a survey of presidents, provosts, and deans of
admissions at colleges and universities in the same category as the
school being ranked.

USNWR publishes numerical values for 11 of the 16 factors for every
school in the rankings.  It has deliberately chosen not to publish
all the factors, in order to prevent competitors from duplicating
or using its ranking system.  The 5 missing factors are flagged
with a "?" in appendix A.  We have, however, been able to obtain
slightly older data from other sources for two of the missing
factors, flagged with a "+."  Thus, the factors on which we have no
information account for only 5.5 percent of the score used to rank
each school.[3]  One other problem we faced in trying to reproduce
the USNWR rankings was that a number of schools have missing data. 
For example, a number of schools, including George Mason, do not
report the percentage of freshmen in the top 10 percent of their
high school class.  In cases of missing data, we estimated the
numerical values from other indicators, for example using average
SAT score as a surrogate variable in the case just
cited.
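
As an illustration of the surrogate-variable idea (the estimates in
this report were made in the spreadsheet itself; the pairs below are
hypothetical), one can fit a line relating average SAT to the
top-10-percent figure on schools that report both, and use it to
predict the missing value:

    # Minimal sketch, with made-up data, of filling a missing factor
    # from a surrogate: fit "% of freshmen in top 10% of HS class"
    # against average SAT on schools reporting both, then predict for
    # schools (like George Mason) that report only SAT.
    def fit_line(xs, ys):
        """Least-squares slope and intercept for y ~ a*x + b."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        return a, my - a * mx

    # Hypothetical (average SAT, % in top 10%) pairs for schools that
    # report both quantities:
    known = [(1400, 85), (1300, 65), (1200, 45), (1100, 28), (1000, 12)]
    a, b = fit_line([s for s, _ in known], [t for _, t in known])

    gmu_sat = 1040               # George Mason's reported average SAT
    print(f"estimated top-10% figure: {a * gmu_sat + b:.0f}%")

With these made-up pairs the estimate comes out near George Mason's
actual figure of 15 percent, but any reasonable surrogate relation
could be substituted.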

Given the full database of 227 national universities, we applied
the USNWR ranking formula to each school and calculated a score. 
Our results are shown in appendix C, with schools listed in order
of our computed rank, shown in column 2.  Column 1 of the table
shows the USNWR rank, so a comparison of the two columns shows how
well we did in duplicating the USNWR ranking system.  But recall
that the comparison holds only for the first 50 schools in tier 1,
beyond which USNWR does not rank.  The average difference in rank
between our ranking and USNWR's for the 50 schools in the first
tier was just under two places, which seems quite good considering
the missing data and other problems.  Actually, we may have come
even closer than two places, because USNWR lists a large number of
schools as being tied in the rankings -- see appendix B -- and we
arbitrarily broke those ties based on the order in which it listed
the schools.
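
The scoring step itself is straightforward to sketch in Python.  The
cluster-level weights below are placeholders (apart from the 10
percent for financial resources cited earlier; the actual weights
appear in appendix A), and the normalization is the simple
proportional one USNWR adopted this year:

    # A sketch of the scoring step.  Each factor is normalized so the
    # best school gets 1.0 (factors where lower is better, such as
    # acceptance rate, would be inverted first), then combined in a
    # weighted sum.
    WEIGHTS = {
        "academic_rep":   0.25,   # placeholder weight
        "grad_retention": 0.25,   # placeholder
        "faculty_res":    0.20,   # placeholder
        "selectivity":    0.15,   # placeholder
        "financial_res":  0.10,   # 10% per the discussion above
        "alumni_giving":  0.05,   # placeholder
    }

    def overall_scores(schools):
        """schools: {name: {factor: value}}, higher = better throughout."""
        best = {f: max(s[f] for s in schools.values()) for f in WEIGHTS}
        return {name: sum(w * vals[f] / best[f]
                          for f, w in WEIGHTS.items())
                for name, vals in schools.items()}

    # The ranking is then just a sort by score, with ties broken by the
    # order in which USNWR lists the schools, as noted above.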

It might be expected that the very close agreement between our
ranking and that of USNWR found for first-tier schools would widen
somewhat in the lower tiers.  Thus, we rank George Mason at number
153 (in the middle of tier 3), while USNWR ranks us at 169 (closer
to the bottom of the tier at 176).  It seems likely that the
16-place difference in our University's rank is due mainly to our
failure to use faculty salaries adjusted for area cost of living,
which would tend to make our ranking of the University higher than
USNWR's.[4]  In order to move to the 2nd tier we would need to go
from 169th place up to 120th place.  The difference between the
computed scores (see column 3) for these two places is 0.083.  In
other words, George Mason would need to improve from its current
score of 0.249 to 0.332 to move to tier 2.

Appendix C is in the form of a spreadsheet, which anyone can use
dynamically to see the effect of changing any of the variables for
any university.  An additional "Virtual George Mason" has been
added to the table for convenience, in order to see the effect on
our ranking of changing any of the variables from their actual
values.  Suppose we first see what it takes to move to tier 2 by
changing one variable at a time.  Below we show the change that
would need to be made in each of the 13 variables, acting alone, to
increase our score to 0.332.  These values can easily be verified
by changing the appropriate entry in the spreadsheet row for
"Virtual George Mason" and observing the resulting change in its
score:

        Change needed in each variable to move us to tier 2

        variable                     change needed
1       $K per student:              8       to 553
2       faculty salary($K):          58.2    to 153.5
3       Academic rep:                2.9     to 3.96        promising
4       Fr. ret. rate:               74%     to 165%
5       Graduation rate:             51%     to 96.7%       promising
6       Diff between rates:          -8%     to 84.3%
7       Classes under 20:            27%     to 140.4%      promising
8       Classes over 50:             17%     to -99.2%
9       % FT faculty:                79%     to 709%
10      Average SAT:                 1040    to 1884        promising
11      Top 10% of HS class          15%     to 165%        promising
12      Acceptance rate:             63%     to -262%       promising
13      Alumni giving %:             10%     to 119.6%      promising
        
Clearly, some of the 13 variables (numbers 4, 7, 8, 9, 10, 11, 12,
and 13) would have to be changed to impossible values, and others
would need to change by ridiculously large amounts, to move us to
tier 2 on their own.  For example, the "devalued" dollars-per-
student variable would need to increase by nearly 6900 percent,
which shows just how unpromising it is.  The variables labelled
"promising" are those which seem to offer the best chance of moving
us up in the rankings, even if on their own they cannot do the
whole job.  More specifically, we call a variable "promising" if a
change of 10 percent of what is needed to move us to tier 2 is easy
to imagine.
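
Because the score is (to a good approximation) linear in each
normalized factor, the one-variable-at-a-time figures in the table
follow from a one-line calculation.  The sketch below shows the
idea, with a placeholder weight and normalization rather than the
real values from appendix A:

    # Holding everything else fixed, solve for the raw value one
    # factor must reach to lift the overall score to the tier-2
    # threshold.  Assumes score = ... + weight * value / best_value
    # + ..., so the required change follows from the factor's weight.
    def value_needed(current_value, weight, best_value,
                     current_score=0.249, target=0.332):
        deficit = target - current_score     # the 0.083 gap from above
        return current_value + deficit * best_value / weight

    # E.g., with a placeholder 5% weight for alumni giving and a best
    # school at 55%, George Mason's 10% rate would have to become:
    print(value_needed(0.10, 0.05, 0.55))    # -> about 1.01, i.e. 101%

The table's figure of 119.6 percent comes from the actual weight and
normalization in the spreadsheet; the point of the sketch is only
that small-weight factors require absurd changes when acting alone.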

Some explanation is in order concerning the inclusion of variables
7, 10, 11, 12, and 13 in the promising category.  Even though the
SAT score and the acceptance rate would need to change by
impossible amounts to lift us to tier 2 on their own, variables
10-12 are highly correlated with one another, so increasing one of
them would wind up increasing all three.  Variables 7 and 13 have
been labelled promising because there may be ways to achieve
substantial increases in those two categories with minimal effort,
as discussed later.


What are some specific scenarios under which George Mason could
move from tier 3 to tier 2, and what would be required in each
case?

                     Specific scenarios A, B, C, and D for moving into tier 2:

Variable (current value)                     A              B             C              D
Academic rep (2.9)                           3.2            3.3           3.2            3.1
Fr. ret rate (74%)                           86%            80%           78%            76%
Grad rate  (51%)                             71%            62%           58%            56%
Average SAT (1040)                           1100           1120          1100           1080
Top 10% of HS class(15%)                     20%            25%           25%            20%
Acceptance rate (63%)                        53%            50%           58%            55%
Alumni giving rate (10%)                     10%            20%           35%            60%
Classes under 20 (27%)                       27%            30%           44%            44%

Scenario A relies mainly on dramatic improvements in our retention
and graduation rates, along with increases in our academic
reputation and in the quality of incoming students.  Scenario B
involves greater increases in our academic reputation and in the
quality of incoming students, less dramatic increases in retention
and graduation rates, and an increase in alumni giving.  Scenarios
C and D involve very substantial increases in both the alumni
giving rate and the percentage of classes under 20 students.  It
may be possible to manage these latter two increases with
relatively modest effort.
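
Under the same linear-score assumption used above, the gain from a
whole scenario is just the sum of each changed factor's weighted,
normalized change.  The sketch below plugs in some of scenario A's
changes with placeholder weights and normalizations; the spreadsheet
with the real values from appendix A remains the authoritative
check:

    # Rough check of a scenario: sum each factor's contribution
    # change, weight * (new - old) / best.  Weights and "best" values
    # are placeholders; lower-is-better factors (e.g. acceptance rate)
    # would need their normalization inverted, and are omitted here
    # for simplicity.
    scenario_a = [  # (old, new, placeholder weight, placeholder best)
        (2.90, 3.20, 0.25, 4.90),   # academic reputation
        (0.74, 0.86, 0.10, 0.97),   # freshman retention rate
        (0.51, 0.71, 0.10, 0.97),   # graduation rate
        (1040, 1100, 0.06, 1415),   # average SAT
        (0.15, 0.20, 0.03, 0.90),   # top 10% of HS class
    ]
    gain = sum(w * (new - old) / best for old, new, w, best in scenario_a)
    print(f"score gain ~ {gain:.3f} (gap to tier 2 is 0.083)")

With these made-up values the five factors shown close only part of
the 0.083 gap; in the actual spreadsheet, scenario A closes the full
gap, the shortfall here reflecting only the placeholder values and
the omitted acceptance-rate change.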

With respect to the alumni giving rate, recall that the definition
is based simply on the fraction of alumni who donate money,
regardless of the amount -- even if it were only a dollar.  Perhaps
a campaign could elicit such small contributions on an annual
basis.  However, such a campaign would need to be conducted without
stressing its value in affecting our USNWR ranking, lest it draw
adverse publicity.  With respect to the percentage of classes under
20, it would normally be quite costly to achieve a large increase
in this area.  However, there may be an opportunity to take
advantage of an ambiguity in the questionnaire USNWR uses to gather
this information from us.  We have not until now been counting
individualized sections in the classes-under-20 category.  Were we
to count them, we would improve from 27 percent of classes under 20
(one of the poorest scores in the nation) to 44 percent (the value
used in scenarios C and D).  However, such a change might raise
eyebrows at USNWR, which keeps track of previous years' values on
its forms.


Footnotes:

1. Clifford Adelman, "Why we can't stop talking about the SAT," in
the November 5, 1999 issue of the Chronicle of Higher Education.

2. The basis for the Carnegie classifications is changing in 2000,
which may put the University back in the "less elite" pool. 
Henceforth, Doctoral/Research I schools are those with 50 or more
doctorates in at least 15 disciplines -- see article in November 5,
1999 Chronicle.

3. However, the data obtained from other sources were three years
out of date, and they did not exactly correspond to the items USNWR
used: (a) we used faculty salaries at each institution unadjusted
for area cost of living, and (b) we used institutional expenditures
per headcount rather than per FTE student.

4. The data for faculty salaries was obtained from the NSF which,
unlike the USNWR data, was not corrected for area cost of living.

5. Wendy R. Leibowitz, "Colleges urged to lure more visitors to web
sites," November 12, 1999 issue of the Chronicle of Higher
Education.