Survey Says: Analog-centric Academics and Popular Perceptions

We do hope you’re sitting down for this one, folks. A recent story in the Higher Education section of The Economist is blowing the lid off a secret that has been kept under wraps for centuries. Ready? The world of academia does not always enthusiastically rush to embrace change.

Now, listen—if your pulse is still racing, you might not want to read the story itself; we’re happy to provide a summary. The piece is called “Learned Luddites,” with the subhead, “Many professors are hostile to online education.” But before you lunge for the panic button, allow us to provide some context from the story itself.

A recent study of faculty attitudes to technology by the online publication Inside Higher Ed found much skepticism about MOOCs…

This will come as little surprise to anyone who’s been following the increasingly wide inroads that MOOCs and online education have been making into traditional curricula—or to anyone who has spent time in higher ed. The San Jose State and UMass Amherst stories in particular will be familiar to regular Aspire readers, as will our perspective on the benefits offered by online higher ed, such as web-based tools for instructors and integrated experiential learning.

In any case, it looks like the title of the Economist story is on the mark—so far. However, let’s pick up that sentence where we left off.

…but also that staff who have actually taught on them are far more positive about their quality.

Seems like the jury is still out. Here’s another excerpt:

Nishikant Sonwalkar, the editor of MOOCs Forum, says professors do not want to teach on courses they did not create. At the same time they are concerned about “academic marginalization.”

The piece goes on to note that UMass President Emeritus Jack Wilson compares online courses to textbooks; after all, most instructors use textbooks, but very few of them write their own. Which is a fair point, but on the other hand, one text does not an entire semester make: Textbooks and other resources are what instructors combine to create their own courses and curricula. (A fairer comparison might be to a recipe; different cooks may use the same tools and ingredients to make the same dish, yet can come up with vastly different results.)

In other news, while the Economist equivocates, a post at the Chronicle of Higher Education on a recent Gallup poll indicates—at first glance, anyway—that opinion isn’t so evenly divided.

In early October, Gallup asked two groups, each composed of more than 1,000 adults, whether they thought “online education is better” in a series of categories. In terms of “providing a wide range of options for curriculum” and “good value for the money,” online education got slightly better scores than traditional classroom-based education.

But online education scored much worse in four areas: delivering “instruction tailored to each individual,” providing “high-quality instruction from well-qualified instructors,” offering “rigorous testing and grading that can be trusted,” and—finally, worst of all—dispensing “a degree that will be viewed positively by employers.”

The story doesn’t stop there; traditional bricks-and-mortar colleges (from, as far as we can tell, the Ivy League to the local community college) also came out looking better than their digital counterparts:

Only a third of the respondents rated online programs as “excellent” or “good,” while 68 percent gave excellent or good ratings to four-year colleges and universities, and 64 percent gave such ratings to community colleges.

All of this would seem daunting to anyone invested in the future of online education, except for one thing—here’s an excerpt from the Gallup survey’s notes on methodology:

Results from the Oct. 3-6, 2013, Gallup poll are based on telephone interviews with a random sample of 1,028 adults, aged 18 and older, living in all 50 U.S. states and the District of Columbia.

Samples are weighted to correct for unequal selection probability, nonresponse, and double coverage of landline and cell users in the two sampling frames. They are also weighted to match the national demographics of gender, age, race, Hispanic ethnicity, education, region, population density and phone status (cellphone only/landline only/both and cellphone mostly).

It looks pretty exhaustive, but what’s missing? Any indication of the participants’ familiarity with higher education.