
George McCully: We need to battle AI robotization

From The New England Journal of Higher Education, a service of The New England Board of Higher Education (nebhe.org)

Every reader of this journal is being affected by the highly exceptional historical phenomenon we are all experiencing: an age of total transformation, of paradigm-shifts in virtually every field of human endeavor. Our own field—postsecondary education and training—is just one among all the others. Younger colleagues, though they may not like it, are experiencing this as a given and generally constructive condition, building their future theaters of operations. Senior colleagues, raised and entering the profession in the 20th Century paradigm of “higher education,” experience the current transformations as disruptive—disintegrative and destructive of the professional world they originally sought and later grew accustomed to. Students seeking credentials for future jobs are confused and face unusually difficult choices.

It helps to understand all this turmoil as an inexorable historical process. This article will describe that process, and then address how we individually, and organizations like NEBHE, might best deal with it.

We happen to be living in a very rare kind of period in Western history, in which everything is being radically transformed at once. Paradigm-shifts in particular fields happen frequently, but when all fields are in paradigm-shifts simultaneously, it is an Age of Paradigm Shifts. This has happened only three times in Western history, about a thousand years apart—first with the rise of Classical Civilization in ancient Greece; second with the fall of Rome and the rise of medieval Christianity; and third in the “early modern” period—when the Renaissance of Classicism, the Reformation of Christianity, the Scientific Revolution, the Age of Discovery, the rise of nation states, secularization and the Enlightenment, cumulatively replaced medieval civilization and gave birth to “modern” history in the 19th and 20th centuries.

Modernity, however, is now unraveling, in a transformation with significant unique features.

First and most noticeable is its speed, occurring in a matter of decades (since circa 1990) rather than centuries. The acceleration of change in history, driven by technology’s increasing pace and power, has been going on for centuries—perhaps first noticed by Machiavelli in the Renaissance. Today, the driving transformational force is the rapidly accelerating innovations in digital and internet technology, in particular, increasingly autonomous Artificial Intelligence (AI).

Second, for the first time our technology is increasingly acknowledged to be running ahead of human control; it is becoming autonomous and self-propelled, and we are already struggling to catch up with it.

Third, whereas the first three transformations were intended by human agents, now for the first time, driven by technological advances, we have no clue where the technology is headed—what the dénouement will be, or whether one is even possible.

And fourth, under these conditions of constant change both internally and all around us, strategic planning in any traditional sense is impossible because there are no solid handles we can grasp and hold onto as collaborators or guides into the future. We are adrift in an unprecedentedly tumultuous sea of change.

For people in post-secondary education and training, this critical situation is especially agonizing because we work at society’s thresholds of adulthood, where personal and professional futures are crucially chosen and determined. We bear extraordinarily heavy responsibility for the futures of individuals, of society, of humanity and of our planet—precisely when we have inadequate competence, coordination and self-confidence. We are where humans will address the innovations and disruptions critical to the future, where strategic knowledge and intelligence will be most needed, and where historical understanding will therefore be crucial.

Let us first acknowledge that each and all of our jobs, sooner or later, can and probably will be robotized by AI. As many journalists and scholars have noted, its technical capacity already exists for thinking and writing (the publicly available GPT-3, and next year GPT-4), visual and musical arts, interactive conversations, cerebral games (chess, Go, et al.) and other problem-solving activities, at quality levels equal to, and frequently exceeding, human-generated work. Significant technological advances are happening almost weekly, and AI nowadays develops these autonomously, written in code it has devised for its own machine-learning use. Its aims are not excellence, truth or other value-intensive products, but common-denominator performances adequate to compete commercially with works created by humans and indistinguishable from them by humans (though software intended to make that distinction is being developed), produced at blinding speed. Suddenly there could appear countless new “Bach” fugues, novels in Hemingway’s style, etchings by Rembrandt, news articles and editorials or academic work—all mass-produced by machines.

Administrative functions—numerical, literary, interactive, decision-making, etc.—will be widely available to ordinary individuals and institutions. What will remain that humans are needed to do, at prices that yield living wages, is a real question already being considered hypothetically.

Speed bumps shielding us from faster robotization are (temporarily at least) the need for sufficient capitalization and the time required for dissemination. Here, the fact that we work much more slowly than AI is a temporary but ultimately self-defeating blessing. The driving incentive for takeover is the robots’ greater cost-effectiveness: in the long run, robots work more cheaply and much faster than humans, outweighing losses of quality in performance.

For individuals, our best defense against robotic takeover is for each of us to identify and enhance whatever aspects of our jobs humans can still do best. That means we should all start redefining our work in humanistic and value-intensive directions, so that when robotization comes knocking, decision-makers will go for the low-hanging fruit—allowing the easiest transitions first and leaving some margin of freedom for humans to continue doing their jobs.

A clear possibility, and I believe necessity, for postsecondary education and training lies in that distinction, extending from individuals’ lives and work to institutions and organizations like NEBHE and their instruments such as this journal. The rise of machine learning (AI) is a wedge, compelling us to cease referring to all post-secondary teaching and learning as “higher education.” There is nothing “higher” about robotic training and commercial credentialing for short-term “gig economy” job markets. Let us therefore first define our terms more carefully and precisely.

“Education,” as in “liberal education,” traditionally means “self-development”; “training” customarily means “knowledge and skills development.” The two are clearly distinct, but not separate except in extreme cases; when mixed, the covering designation depends on which is primary and intentional in each particular case.

In other words, our education helps define who we are; our training helps define what we are—doctor, lawyer, software engineer, farmer, truckdriver, manufacturer, etc. Who we are is an essential and inescapable part of all of our lives, always with us; what we are is optional—what we choose to do and be at given times of our lives.

Education is intrinsically humanistic and value-intensive, therefore most appropriately (but not necessarily) taught and learned between humans; training can be well-taught by AI, imparting knowledge and skills from robots to humans. Ideally, to repeat for emphasis, both education and training usually involve each other in varying proportions—when education includes knowledge and skills development, and training is accompanied by values. But these days especially, we should be careful not to confuse them.

For individuals, certainly education and possibly training are continuing lifelong pursuits. AI will take over training most easily and first, especially as rapid changes and transformations overtake every field, already producing a so-called “gig” economy in which work in any given capacity increasingly becomes temporary, more specialized and variegated, affecting the lives and plans of young employees today. Rapid turnover requires rapid increases in training, certifying and credentialing programs and institutions. Increasing demand has evoked many new forms of institutionalization—e.g., online and for-profit providers in addition to traditional postsecondary colleges and universities—as well as an online smorgasbord of credentials for personal subscription. For all of these, AI offers optimal procedures and curricula, increasingly the only way to keep up with exploding demand; thus, the proliferation of kinds of institutions, programs, curricula, courses and credentials is sure to continue.

The need for education will also increase, so delivering it, and shaping its content, in such a disturbed environment will require extraordinarily innovative creativity, resilience and agile adaptability among educators. The highest priority will have to be keeping up with the transformations—figuring out how best to insert into the many new forms of training the cultivation of humane values, which is best accomplished between human teachers (scholars, professors, practitioners) and learners and provided by educational institutions and individuals. Ensuring that this happens will be a major responsibility of today’s educational infrastructure and personnel, because AI does not, and does not have to, care about values. Will individual learners care? Not necessarily—for consumers of credentials, personal and even professional values apart from their commercial value are not a top priority. Whether employers will care about them is a major concern for educators.

Here, the paradigm-shift in post-secondary education and training comes to the attention of umbrella organizations like, for example, NEBHE and this journal. When NEBHE was founded, in 1955, the dominant paradigm in post-secondary education was referred to simply as “higher education” (the “HE” in those acronyms)—residing in liberal arts colleges (including community colleges) and universities. The New England governors, realizing that the future prosperity of our region would be heavily dependent on “higher” education, committed their states to the shared pursuit of academic excellence, in which New England was arguably the national leader.

That simple paradigm, however, has been superseded in practice. Today, the much greater variety of institutional forms and procedures, much more heavily reliant on rapidly developing technology, and the recognized need for broader inclusion of previously neglected and disadvantaged populations, call for reconceptualization and rewording, reflecting the broader new reality of post-secondary, lifelong, continuing education and training.

New England is no longer the generally acknowledged national leader in this proliferation; the paradigm-shifting is a nationwide phenomenon. Nonetheless, though the rationale for a New England regional umbrella organization for both educational and training infrastructure has been transformed, it persists. Now it is needed to help the two branches of post-secondary human and skills development work in mutually reinforcing ways, despite the challenges—which are accelerating and growing—for both branches. Lifelong continuing education and training will be enriched, strengthened and refined by their complementary collaboration for all demographic constituencies.

How this might happen among an increasing variety of institutions still needs to be worked out in this highly fluid and dynamic environment. That is the urgent and challenging mission and responsibility of the umbrella organizations. At the ground level, individual professionals need to reassess their jobs defensively in humanistic directions, fortified by a strategic sense of mission as a crucial element in the comprehensive infrastructure. Beyond that, coordinated organization will help form multiple alliances among institutions in a united front against encroaching AI robotization. This may be the only pathway for humans to retain roles and responsibilities into the future.

George McCully is a historian, former professor and faculty dean at higher education institutions in the Northeast, professional philanthropist and founder and CEO of the Catalogue for Philanthropy.


George McCully: Can academics build a safe partnership between humans and now-running-out-of-control artificial intelligence?

— Graphic by GDJ

From The New England Journal of Higher Education, a service of The New England Board of Higher Education (nebhe.org), based in Boston

Review

The Age of AI and Our Human Future, by Henry A. Kissinger, Eric Schmidt and Daniel Huttenlocher, with Schuyler Schouten; New York, Little, Brown and Co., 2021.

Artificial intelligence (AI) is engaged in overtaking and surpassing our long-traditional world of natural and human intelligence. In higher education, AI apps and their uses are multiplying—in financial and fiscal management, fundraising, faculty development, course and facilities scheduling, student recruitment campaigns, student success management and many other operations.

The AI market is estimated to have an average annual growth rate of 34% over the next few years—reaching $170 billion by 2025 and more than doubling to $360 billion by 2028, reports Inside Higher Ed.

Congress is only beginning to take notice, but we are told that 2022 will be a “year of regulation” for high tech in general. U.S. Sen. Kirsten Gillibrand (D-N.Y.) is introducing a bill to establish a national defense “Cyber Academy” on the model of our other military academies, to make up for lost time by recruiting and training a globally competitive national high-tech defense and public-service corps. Many private and public entities are issuing reports declaring “principles” that they say should be instituted as human-controlled guardrails on AI’s inexorable development.

But at this point, we see an extremely powerful and rapidly advancing new technology that is outrunning human control, with no clear resolution in sight. To inform the public of this crisis, and ring alarm bells on the urgent need for our concerted response, this book has been co-produced by three prominent leaders—historian and former U.S. Secretary of State Henry Kissinger; former Google CEO and Chairman Eric Schmidt; and MacArthur Foundation Chairman Daniel Huttenlocher, the inaugural dean of MIT’s new Schwarzman College of Computing, responsible for thoroughly transforming MIT with AI.

I approach the book as a historian, not a technologist. I have contended for several years that we are living in a rare “Age of Paradigm Shifts,” in which all fields are simultaneously being transformed, in this case, by the IT revolution of computers and the internet. Since 2019, I have suggested that there have been only three comparably transformative periods in the roughly 5,000 years of Western history; the first was the rise of Classical civilization in ancient Greece, the second was the emergence of medieval Christianity after the fall of Rome, and the third was the secularizing early-modern period from the Renaissance to the Enlightenment, driven by Gutenberg’s IT revolution of printing on paper with movable type, which laid the foundations of modern Western culture. The point of these comparisons is to illuminate the depth, spread and power of such epochs, to help us navigate them successfully.

The Age of AI proposes a more specific hypothesis, independently confirming that ours is indeed an age of paradigm shifts in every field, driven by the IT revolution, and further declaring that this next period will be driven and defined by the new technology of “artificial intelligence” or “machine learning”—rapidly superseding “modernity” and currently outrunning human control, with unforeseeable results.

The argument

For those not yet familiar with it, an elegant example of AI at work is described in the book’s first chapter, summarizing “Where We Are.” AlphaZero is an AI chess player. Earlier computers (Deep Blue, Stockfish), programmed with centuries of championship games that they rapidly scan for previously successful plays, had already defeated human grandmasters. AlphaZero was given only the rules of chess—which pieces move which ways, with the object of capturing the opposing king. It then taught itself in four hours how to play the game and has since defeated all computer and human players. Its style and strategies of play are, needless to say, unconventional; it makes moves no human has ever tried—for example, more sacrificing of valuable pieces—and turns them into successes that humans could neither foresee nor resist. Grandmasters are now studying AlphaZero’s games to learn from them. Garry Kasparov, the former world champion, says that after a thousand years of human play, “chess has been shaken to its roots by AlphaZero.”

A humbler example that may be closer to home is Google’s mapped travel instructions. This past month I had to drive from one turnpike to another in rural New York; three routes were proposed, and the one I chose twisted and turned through unnumbered, unsigned, often very brief passages on country roads that no humans on their own could possibly have identified as useful. AI had spontaneously found them by reading road maps. The revolution is already embedded in our cellphones, and the book says “AI promises to transform all realms of human experience. … The result will be a new epoch,” which it cannot yet define.

Their argument is systematic. From “Where We Are,” the next two chapters—”How We Got Here” and “From Turing to Today”—take us from the Greeks to the geeks, with a tipping point when the material realm in which humans have always lived and reasoned was augmented by electronic digitization—the creation of the new and separate realm we now call “cyberspace.” There, where physical distance and time are eliminated as constraints, communication and operation are instantaneous, opening radically new possibilities.

One of those with profound strategic significance is the inherent proclivity of AI, freed from material bonds, to grow its operating arenas into “global network platforms”—such as Google, Amazon, Facebook, Apple, Microsoft, et al. Because these transcend geographic, linguistic, temporal and related traditional boundaries, questions arise: Whose laws can regulate them? How might any regulations be imposed, maintained and enforced? We have no answers yet.

Perhaps the most acute illustration of the danger here is in the field of geopolitics—national and international security, “the minimum objective of … organized society.” A beautifully lucid chapter concisely summarizes the history of these fields, and how they were successfully managed to deal with the most recent development of unprecedented weapons of mass destruction through arms-control treaties between antagonists. But in the new world of cyberspace, “the previously sharp lines drawn by geography and language will continue to dissolve.”

Furthermore, the creation of global network platforms requires massive computing power achievable only by the wealthiest and most advanced governments and corporations, yet their proliferation and operation are possible for individuals with handheld devices using software stored on thumb drives. This makes it currently impossible to monitor, much less regulate, power relationships and strategies. Nation-states may become obsolete. National security is in chaos.

The book goes on to explore how AI will influence human nature and values. Westerners have traditionally believed that humans are uniquely endowed with superior intelligence, rationality and creative self-development in education and culture; AI challenges all that with its own alternative and in some ways demonstrably superior intelligence. Thus, “the role of human reason will change.”

That bears especially on us higher educators. AI is producing paradigm shifts not only in our various separate disciplines but in the practice of research and science itself, in which models are derived not from theories but from previous practical results. Scholars and scientists can be told the most likely outcomes of their research at the conception stage, before it has practically begun. “This portends a shift in human experience more significant than any that has occurred for nearly six centuries …”—that is, since Gutenberg and the Scientific Revolution.

Moreover, a crucial difference today is the rapidity of transition to an “age of AI.” Whereas it took three centuries to modernize Europe from the Renaissance to the Enlightenment, today’s radically transformative period began in the late 20th Century and has spread globally in just decades, owing to the vastly greater power of our IT revolution. Now whole subfields can be transformed in months—as in the cases of cryptocurrencies, blockchains, the cloud and NFTs (non-fungible tokens). With robotics and the “metaverse” of virtual reality now capable of affecting so many aspects of life beginning with childhood, the relation of humans to machines is being transformed.

The final chapter addresses AI and the future. “If humanity is to shape the future, it needs to agree on common principles that guide each choice.” There is a critical need for “explaining to non-technologists what AI is doing, as well as what it ‘knows’ and how.” That is why this book was written. The chapter closes with a proposal for a national commission to ensure our competitiveness in the future of the field, which is by no means guaranteed.

Evaluation

The Age of AI makes a persuasive case that AI is a transformative break from the past, sufficiently powerful to be carrying the world into a new “epoch” in history, comparable to that which produced modern Western secular culture. It advances the age-of-paradigm-shifts analysis by specifying that the driver is not just the IT revolution in general, but its particular expression in machine learning, or artificial intelligence. I have called our current period the “Transformation” to contrast it with the comparable but retrospective “Renaissance” (rebirth of Classical civilization) and “Reformation” (reviving Christianity’s original purity and power). Now we are looking not to the past but to a dramatically new and indefinite future.

The book is also right to focus on our current lack of controls over this transformation as posing an urgent priority for concerted public attention. The authors are prudent to describe our current transformation by reference to its means, its driving technology, rather than to its ends or any results it will produce, since those are unforeseeable. My calling it a “Transformation” does the same, stopping short of specifying our next, post-modern, period of history.

That said, the book would have been strengthened by giving due credit to the numerous initiatives already attempting to define guiding principles as a necessary prerequisite to asserting human control. Though it says we “have yet to define its organizing principles, moral concepts, or aspirations and limitations,” it is nonetheless true that the extreme speed and global reach of today’s transformations have already awakened leading entrepreneurs, scholars and scientists to its dangers.

A 2020 report from Harvard and MIT compares 35 such projects. One of the most interesting is “The One Hundred Year Study on Artificial Intelligence (AI100),” an endowed international, multidisciplinary and multisector project launched in 2014 to publish reports every five years on AI’s influences on people, their communities and societies; two lengthy and detailed reports have already been issued, in 2016 and 2021. Our own government’s Department of Defense published a discussion of guidelines for national security in 2019, and the Office of Science and Technology Policy is gathering information to create an “AI Bill of Rights.”

But while various public and private entities pledge their adherence to these principles in their own operations, voluntary enforcement is a weakness, so the assertion of the book that AI is running out of control is probably justified.

Principles and values must qualify and inform the algorithms shaping what kind of world we want ourselves and our descendants to live in. There is no consensus yet on those, and it is not likely that there will be soon given the deep divisions in cultures of public and private AI development, so intense negotiation is urgently needed for implementation, which will be far more difficult than conception.

This is where the role of academics becomes clear. We need to be aware that when all fields are in paradigm shifts simultaneously, adaptation and improvisation become top priorities. Formulating future directions must be fundamental and comprehensive—holistic, with inclusive specialization—the opposite of the multiversity’s characteristically fragmented, exclusive specialization to which we have been accustomed.
Traditional academic disciplines are now fast becoming obsolete because our major problems—climate change, bigotries, disparities of wealth, pandemics, political polarization—are not structured along academic disciplinary lines. Conditions must be created that are conducive to integrated paradigms. Education (that is, self-development of who we shall be) and training (that is, knowledge and skills development for what we shall be) must be mutual and complementary, not separated as is now often the case. Only if the matrix of future AI is humanistic will we be secure.

In that same inclusive spirit, perhaps another book is needed to explore the relations between the positive and negative directions in all this. Our need to harness artificial intelligence for constructive purposes presents an unprecedented opportunity to make our own great leap forward. If each of our fields is inevitably going to be transformed, a priority for each of us is to climb aboard—to pitch in by helping to conceive what artificial intelligence might ideally accomplish. What might be its most likely results when our fields are “shaken to their roots” by machines that have with lightning speed taught themselves how to play our games, building not on our conventions but on innovations they have invented for themselves?

I’d very much like to know, for example, what will be learned in “synthetic biology” and from a new, comprehensive cosmology describing the world as a coherent whole, ordered by natural laws. We haven’t been able to make these discoveries yet on our own, but AI will certainly help. As these authors say, “Technology, strategy, and philosophy need to be brought into some alignment” requiring a partnership between humans and AI. That can only be achieved if academics rise above their usual restraints to play a crucial role.

George McCully is a historian, former professor and faculty dean at higher education institutions in the Northeast, professional philanthropist and founder and CEO of the Catalogue for Philanthropy.

The Infinite Corridor is the primary passageway through the campus of MIT, in Cambridge, a world center of artificial intelligence research and development.

What colleges owe our democracy

The Seeley G. Mudd Building at Amherst College, the elite small liberal-arts college in the Massachusetts town of the same name, which has abolished legacy admissions. The striking building, for math and computer science, was designed by Edward Larrabee Barnes and John M.Y. Lee and Partners with funds donated by the Seeley G. Mudd Foundation, named for a physician and philanthropist who lived from 1895 to 1968 and didn’t attend Amherst.

From The New England Journal of Higher Education, a service of The New England Board of Higher Education (nebhe.org)

WATERTOWN, Mass.

What Universities Owe Democracy; Ronald J. Daniels with Grant Shreve and Phillip Spector; Johns Hopkins University Press; Baltimore; 2021.

When the president of a major university publishes a deeply researched, closely reasoned and strongly argued call to the profession to respond to an urgent crisis in our national history, the book is highly likely to become a classic in the literature of higher education. Ronald Daniels, president of Johns Hopkins University (co-authoring with colleagues Grant Shreve and Phillip Spector), has accomplished that with this new book, appropriately entitled What Universities Owe Democracy.

The New England Journal of Higher Education has responded to the widely recognized “epistemic crisis” in our democracy in two previous articles this year. The first, in April, unpacked the economic, technological, psychological and moral aspects of the problem, to focus on higher education’s purview of epistemology, and contended that it is incumbent upon all public educators—including journalists and jurists—from secondary schools onward, to insist that “thinking on the basis of evidence” is the only reliable way to establish and use the power of knowledge in any field. The second, in September, was a critical review of the journalist Jonathan Rauch’s recent book, The Constitution of Knowledge, asserting that he offered not a solution but part of the problem—that the epistemic crisis in public (or popular) knowledge, Rauch’s actual subject, is exacerbated by journalism’s misconceived habit of promoting broad public acceptance and trust, rather than thinking on the basis of evidence, as the criteria of truth.

The fundamental issue underlying both those articles—that scholars and scientists have civic responsibility—has now been addressed by Daniels, who has been previously known as a leading advocate for eliminating legacy admissions at prestigious institutions, which he did at Hopkins in 2014. His example was followed by a few others, most recently Amherst College.

The excellence of his book derives from his extraordinary idealism for higher education and the essential, indispensable role of colleges and universities in what he carefully defines as “liberal democracy.” His basic argument is that the welfare of American universities and of democracy have historically been and are strongly interdependent, so it is now necessarily in the academy’s interest to defend democracy from subversion. He poses as the “relevant question,” “How does the university best foster democracy in our society?”

His answer is painstakingly developed. Each of the book’s four chapters features careful definition of terms, a highly informative history of the chapter’s specific issue in American higher education, an analysis of its current challenges, and the author’s policy recommendations. The discussion is lucid, intellectually rigorous, and considerate of the complexities involved. This brief review cannot do justice to his detailed arguments, so I shall highlight a few points of broad interest.

A leitmotif throughout the book is what Daniels calls “liberal democracy,” which he defines in detail as an Enlightenment ideal: “liberal” in favoring individual freedom, “democracy” in promoting political equality and popular sovereignty. Whereas the two can occasionally conflict, society’s common good depends on their equitable balance. On these ideals, he writes, the United States is predicated.

The first chapter focuses on the “American Dream” of social mobility for this (in JFK’s phrase) “nation of immigrants.” Daniels elucidates the pre-eminent role universities have played in promoting it; no other institution, he says, has been throughout our history and still today more influential in that essential function. “Universities are one of the few remaining places where Americans of different backgrounds are guaranteed to encounter one another.” Therefore, colleges and universities must ensure that everything they do contributes to social mobility. This is where the issue of legacy admissions arises—about which, see more to follow.

The next chapter, “Free Minds,” concerns civic education. Citizenship must be cultivated; it is not an innate trait. This used to be done by civics courses required at the high school level, but in recent decades that has languished, yielding ground to the rise of science and separate specialized disciplines. Today, only 25% of secondary schools require civic education, but because 70% of students go on to some form of postsecondary education or training, that is where, by default, civic education must be revived. Daniels advocates a “renaissance in civic learning” to reaffirm how the Founders envisioned higher education in our democracy. Noting that robust civics education is unlikely to be recovered by high schools in today’s polarized political environment, he presents a strong historical case for including the promotion of democratic citizenship in higher education. Acknowledging the wide diversity of institutional types and cultures in postsecondary education today, he encourages every institution to develop its own approach.

Daniels then turns to the central role of universities in the creation, promotion and defense of knowledge, upon which liberal democracy is necessarily based. American universities have uniquely combined within single institutions their own undergraduate colleges, professional graduate schools, research facilities and scholarly publishing, protected by academic freedom and tenure. This powerful and mutually reinforcing combination has produced intellectual leadership in our liberal democracy. All this has been potently challenged, however, by developments in modern philosophy (linguistic analysis and epistemology) and more recently information technology (computers, the internet, social networks and artificial intelligence). Daniels courageously addresses these extremely complex and subtle issues (e.g., post-structuralism) in detail. His discussion is enlightening and supports his thesis that universities have a crucial role to play in intellectual leadership, “building a new knowledge ecosystem” that will protect and strengthen liberal democracy.

The next chapter, “Purposeful Pluralism,” discusses how colleges and universities may promote both greater diversity in their student bodies and genuine mixing of their constituencies by cultivating more inclusive communications and mutual understanding. But while diversity has been increased by deliberate admissions strategies, there needs to be sustained follow-through in the infrastructures of student life—in housing and rooming arrangements, dining, socializing, and curricular and extracurricular settings, including faculty-student interactions and intellectual life in general. “Our universities should be at the forefront of modeling a healthy, multiethnic democracy.”

He then concludes by reviewing the overall argument, its urgency, and “avenues for reform,” which include: 1) End legacy admissions and restore federal financial aid, 2) Institute a democracy requirement for graduation, 3) Embrace “open” science with guardrails and 4) Reimagine student encounters on campus and infuse debate into campus programming. “The university cannot, as an institution, afford to be agnostic about, or indifferent to, its opposition to authoritarianism, its support for human dignity and freedom, its commitment to a tolerant multiracial society, or its insistence on truth and fact as the foundation for collective decision-making,” Daniels writes. “It is hardly hyperbole to say that nothing less than the protection of our basic liberties is at stake.”

While I may not completely agree with all the positions Daniels takes, I strongly believe that every academic reader will find this book highly illuminating, practically useful and, I hope, compelling. One relatively minor point of difference I have is where today’s vexed issue of legacy admissions is directly addressed. Daniels acknowledges that though the number of admissions decisions involved is relatively small, their symbolic significance is large, especially owing to the prominence of the institutions involved. The practice is followed by 70 of the top 100 colleges in the U.S. News rankings and, though it affects only 10% to 12% of their comparatively small numbers of students, it sends a message that is widely interpreted as elitist and undemocratic. Daniels focuses more on opposing the message than on analyzing the practice in detail, and he provides no hard data on the process or results of eliminating the policy anywhere.

This stood out for me as an odd departure from his usual data-intensive analytical habit. One reason for the exception may be that he considers the message more important than the practical details, but another might be that data have not yet shown the abolition of legacy admissions to have significant practical impact on social mobility. Still another might be that, as I understand it, the reasons for which legacies were created are not the reasons for which they should now be abolished. They were instituted and are maintained primarily for internal institutional purposes—i.e., to encourage alumni engagement and fundraising—and not for any public message.

Here we may connect a few separate dots, not presented together in the book. Daniels abolished the practice at Hopkins in 2014 but did not announce it publicly until 2019. In that interval, he also sought and secured in 2018 a sensational gift from Hopkins alumnus Michael Bloomberg, of $1.8 billion for student financial aid. While it is understandable that Daniels would be reluctant to discuss this historical process in detail, or whether it was planned from the start and enabled by a unique advantage Hopkins had with Bloomberg as an alumnus, it is also conceivable that the Hopkins decision was not problem-free, and that the extremely generous grant was invoked as a solution.

In any case, avoiding the practical issue in the book also avoids considering a possible (though admittedly unforeseeable) solution for other institutions now—i.e., taking advantage of the unprecedentedly high multibillion-dollar gains in 2020 endowment yields and personal capital, separately or together, to make major investments in student financial aid. This may, in other words, be an opportune time to modify legacy policies—perhaps to retain them in some refined or reduced form as an instrument supporting both student diversity and strengthening alumni relations and fundraising, while heading off the public impression of elitism.

The world is changing fast, and it is essential that universities keep up the pace. Political reform is slow and now especially cumbersome, whereas the only impediment to universities adapting and leading is the will to do so. That is where a book such as this can exert palpable influence, and considering how rare it is for such a book to be written, are we not in turn professionally obliged at least to read and think about it?

George McCully is a historian, former professor and faculty dean at higher education institutions in the Northeast, professional philanthropist and founder and CEO of the Catalogue for Philanthropy, based in Watertown, Mass.


George McCully: Higher education in crisis and a paradigm shift


Via The New England Journal of Higher Education, a service of The New England Board of Higher Education (nebhe.org)

BOSTON

Discussions of the problematic future of higher education were already an exploding industry before COVID-19, producing more to be read than anyone could possibly keep up with. Their main audience was academic administrators and a few faculty, worrying where their institutions and careers were headed, and wanting guidance in strategic decision-making—helping to identify not only where they actually were and were going, but also where they might want to go. Experiments were everywhere, momentous decisions were being made, and there were no signs of any problem-solving consensus.

Into that pre-coronavirus maelstrom came Bryan Alexander’s Academia Next: The Futures of Higher Education (Johns Hopkins UP, 2020). Alexander, whose doctorate is in English literature, took care to detail his qualifications and previous experience in futurist studies, and is described in the flyleaf as “an internationally known futurist, researcher, writer, speaker, consultant, and teacher,” currently senior scholar [adjunct] at Georgetown University, founder of the online “Future of Education Observatory” and author of The New Digital Storytelling: Creating Narratives with New Media, and Gearing Up for Learning Beyond K-12.

We note the plural “Futures,” which is commendable because Alexander addresses the wide variety of institutions, from major research universities and state university systems to community colleges and the full range of private liberal arts colleges, each group with its own distinctive future. Alexander’s stated preference for the word “forecast,” as with weather, over “prediction,” as with science, is also appropriate. The method and structure of his book are presented as conventional futurism: to identify “trends,” from them to artfully project multiple “scenarios,” and from those to draw conclusions. This is clearly not science—but more about methodology to follow.

The strongest part of the book is the first, which exhaustively details “trends,” or more accurately “innovations,” since whether they are actually historical “trends” is not critically addressed. Moreover, nothing is said about the central issue of scholarship itself, widely recognized as a major problem—for example, the obsolescence of traditional (mostly 19th Century) multiversity academic disciplines in this century, and the innumerable searches for new strategies and structures. The temporal range of Alexander’s forecasting vision is short—10 to 15 years—but even so, the imagined “Scenarios” section suffers from rhetorical excess and a lack of carefully analyzed pathways telling us how the innovations might become “trends,” and how those might become “scenarios.” The weakest part is the last, purportedly on conclusions, which fails to connect today to tomorrow, or to reach any very helpful conclusions.

Subverting all this, however, are two fundamental flaws, which the book shares with conventional futurist methodology: first, its tacit assumption that historical change is a consistently evolutionary process; and second, the lack of a precise understanding of historical causation.

Futurist studies arose as a field in the last half of the 20th Century, a relatively stable postwar historical period. Alexander’s assumptions reflect this: “In general, the future never wholly eradicates the past. Instead, the two intertwine and influence each other.” This approach is less well suited, and sometimes not suited at all, to periods of revolutionary change, especially if that is widespread and accelerating, as it is today.

Stable periods of history, whether in particular fields (e.g., sciences, technologies, business, scholarship, higher education, etc.) or in general, derive their order from paradigms, that is, established models governing mature fields of activity. Revolutionary change occurs when a paradigm is overthrown or replaced by unordained means, producing an alternative, incompatible one—in politics for example by an unconstitutional change in the constitution of a polity. This distinctive kind of historical change—”paradigm shift”—usually concerns only individual fields, but in the 21st Century we happen to be living in a highly exceptional entire period of paradigm shifts, powered by the revolution in information technology (IT)—computers and the internet. In higher education, paradigms were shifting even before the pandemic, already invalidating forecasts.

Periods of paradigm shift

Periods of paradigm shift are rare—by my count only four in 2,500 years of Western history. The first was the rise of Classical Western civilization itself, extending roughly from Periclean Athens to the fall of Rome—about 1,000 years. The second was the rise of medieval Christian civilization extending from there to the Renaissance and Reformation—another 1,000 years. The third was the “early-modern” period from the Renaissance to the Enlightenment (also incidentally driven by an IT revolution—Gutenberg’s), including the scientific revolution, global discoveries, the emergence of nation-states and secularization—about 300 years, codified by the familiar 19th Century formulation that Western history had three main periods: ancient, medieval and modern.

Today, however, we are entering a fourth great period—signaled by the ubiquity of paradigm shifts and the fundamental issues they raise, for example, with AI, robots and what it is to be human. The character of our new age is not yet defined, as it is still taking shape, but it may become relatively established in only decades, owing to the vastly increased and accelerating power of technology. In short, even before the pandemic, higher education as an emphatically information-intensive field was undergoing its own IT-revolutionary paradigm shifts, amid other paradigm shifts all around it. For such a period, conventional futurist methodology and forecasting are not well suited; Alexander’s book is unaware of all this.

Causation: how it works

A second fundamental flaw is revealed by the book’s tendency to skip over transitional processes—how innovations become trends, trends yield scenarios, and scenarios reach conclusions. We are not told how these happen, or how they work as historical bridges. Nor are the transitions informed by any disciplined understanding of causation, both as a phenomenon and as an instrument of influence or management. The lack of thought about causation is understandable because it is common even among historians, who tend to be more empirical than theoretical because history is so complex. Nonetheless, a deeper and more precise understanding may clarify this discussion.

Consider: Everything and everybody in the world is an element in history—participating in events and developments, what historians study. Each is defined by a limited range of possible roles or activities, to which it is inclined to be conducive, exerting influence. Chairs, tables, boats, tools, chickens, etc., are known by us according to what they are and do, both actually and potentially. They both exist and are potentially conducive to qualifying or influencing their circumstances in the world around them.

Combinations of historical elements therefore also have limited ranges of mutual cooperation—where their respective potentials and influences overlap, and to which they are mutually conducive. Mutual influences—alliances, collaborations, cooperations—are generally more powerful than individual influence. People and institutions are more powerful together than apart. A chair and table in the same room with a person are more likely to be used together than separately or not at all.

Therefore when combinations occur in time and place, the probabilities that their mutual influences will actually happen increase, other things being equal. This is significant for leadership and management, because it means that by intentionally combining elements—”piling up the conducives”—we can increase our influence on events, promoting and helping to cause certain intended results to happen.

Causation in history may therefore be defined as the “coincidence of conducive conditions,” which produces the result studied or sought.

There are several fairly obvious caveats, however: a) elements and combinations vary in power and potential; and b) elements and combinations thereof can be partially or totally opposed to each other as well as mutually reinforcing. History and its study are extremely complex.

Therefore every historical event or development results from complex combinations of influential factors—causes, qualifiers and impediments. Historians identify and describe the activities and influences of various factors in order to illuminate and explain how events and developments happened. Planners, strategists and managers can likewise identify and use the relevant factors, to make desired events happen, to produce desired results—piling up the conducives and qualifiers, and eliminating, neutralizing, or avoiding the impediments, while ignoring the immaterial. Current events in our country and in higher education offer rich examples for this.

In the midst of one or more paradigm shifts, strategic and tactical planning are further complicated by the fact that the normal processes of change are themselves being violated—avoided, transformed and superseded. Thomas Kuhn, who coined the term “paradigm shift” with reference to the Copernican Revolution in science, believed that the results of paradigm shifts are impossible to predict until late in the process—often too late for management. We should also acknowledge that the complexity of history has not yet been reduced to systematic scientific understanding; the study and understanding of history is still more an art than a science.

Higher education in crisis

But now let us consider the already deeply problematic crisis of early 21st Century higher education, into which came coronavirus—a universal disrupter par excellence, leaving no institution or custom unchanged, imposing radical doubts about the future, and in particular forcing re-inventions of traditional practices under new and still unsettled current and future constraints.

There is a key difference between the pre-corona paradigm shifts and those imposed by COVID-19: Whereas the former are a reconstructive phenomenon, driven by the overwhelming power of the IT revolution in every information-intensive field, COVID-19 is an entirely destructive phenomenon, offering no constructive alternative to its victims. What happens when two transformative “conducives”—one constructive, one destructive—collide, especially in an age of paradigm shifts?

So far, the combined effects have been mixed—containing both constructive and destructive parts, as the two forces increasingly coincide. Certainly the rapid and forceful push of often-recalcitrant faculty into socially distanced online instruction is an acceleration of a clearly developing trend under the new IT; but as its effects ramify throughout the problematic business models, residential systems, admissions processes, courses, curricula and even architecture of diverse colleges and universities, academic administrators have no reliable idea yet what, or how viable, new institutions might rise from the rubble.

Education vs. training

We need to be clearer than we have been about what values and issues are at stake. Not so long ago, back in the day when I was a student, we had a clear distinction between “education” and “training.” The former referred to the ancient tradition of liberal education, whose focus was self-development, for human fulfillment. Training, by contrast, was the development of technical knowledge and skills, with a focus on professional employment. “Higher education” came after school education, to prepare students for who they would become as human beings in later life; training prepared students for what they would become professionally in jobs and careers—what occupational and societal roles they would play. Undergraduate years were to be devoted to “higher education” and postgraduate studies to focus on professional technical training—law, medicine, architecture, business, research, teaching, etc.

That paradigmatic distinction and practice has obviously been blurred since then by commercialization. Soaring tuition costs and student-loan indebtedness, tied ever more closely to preparation for future jobs and problematic careers in an increasingly “gig” economy, have forced the flow of student enrollments and funding away from liberal education and the humanities toward more immediately practical and materialistic courses, disciplines, curricula and faculty jobs. This has led students and their parents to see themselves as retail consumers, calculating cost-effectiveness and monetary return on investment in the training marketplace. Terminology has followed, so that gradually “higher education” and “training” have become virtually synonymous, with training dominant.

The forced mass movement to online learning and teaching involves radically different participation, financing and business models. It is increasingly clear that their concurrence and connection with artificial intelligence, big data and the gig economy—and with course offerings often segmented for practical convenience—have been building an extremely powerful “coincidence of conducives” that might complete the transit from education to training that has been going on for the last half-century. If so, this could spell, for all practical purposes, an end to higher education for all but a few very wealthy institutions.

This paradigm shift has operated to the detriment of both education and training, but more dangerously for education. Recent surveys have shown that from 2013 to 2019, the portion of adults regarding college education as “very important” declined from 70% to 51%; a majority of younger adults ages 18 to 29 now consider getting a job to be the primary purpose of earning a college degree, and they, purportedly its beneficiaries, are also the most likely to question its value. Moreover, because online instruction is better suited for training than for education, institutions of higher education face stiff competition in credentialing for jobs from specialized for-profit corporations and from employers themselves, which in effect shove colleges and universities aside, rendering their dominance in the crucial years of early adult maturation superfluous and obsolete.

Conflict resolution

In short, the “coincidence of conducive conditions” for the demise of what used to be called “higher education” is now actively in place, and with the power of the pandemic behind it, the timing is ripe. Reversal is now impossible. We need to ask whether survival is still possible and, if so, how to cause it—how to identify and mobilize sufficient counter-conducives and qualifiers at least to avoid destruction and to achieve some synthesis of both training and education.

The range of possibilities and probabilities is huge, far wider than can be summarized here. But one strategic possibility might be opportunistically to take advantage of the universal disruptive flux as opening up previously foreclosed possibilities—specifically, to reinstitute the traditional distinction between training and education and to combine both at the college level in courses and curricula. The value of the traditional definitions is that they constitute an inevitable complementary and mutually reinforcing bonded pair—developing both who and what students will necessarily become for the rest of their lives. How to combine them will be an unavoidable faculty responsibility, empowered and reinforced by administrative reforms in financial and business models. The result will constitute a radical re-invention of colleges and universities, featuring a rebirth, at long last, of humanistic higher education.

George McCully is a historian, former professor and faculty dean at higher education institutions in the Northeast, then professional philanthropist and founder and CEO of the Catalogue for Philanthropy.


George McCully: Don't try to turn colleges into technical schools to feed business

The annual Leadership Summit of the New England Board of Higher Education, set for this coming Oct. 17, poses the question, “How Employable Are New England's College Graduates, and What Can Higher Education Do About It?”

The Summit will address numerous well-chosen, commonly current questions in and around this topic, predicated on the assertion that “New England employers consistently claim that they can't find sufficient numbers of skilled workers—especially in key tech-intensive and growth-oriented industries like information technology, healthcare and advanced manufacturing.” The strategic questions are: “Is higher education to blame? Are our colleges and universities still operating in 'old economy' modes, in terms of services, practices and strategies for preparing students for career transitions and employability?” And, “Can New England's colleges and universities be the talent engine that they need and ought to be?”

The following addresses the factual premises, their historical context, and strategic issues, in a constructive attempt to clarify and enrich the discussion at the Summit.

First, as to the facts on employability: It is commonly believed, but incorrectly, that today’s college graduates have high unemployment; a recent study found that compared to other age and education cohorts, they actually have the lowest rate of unemployment—about 2 percent. Moreover, today’s job market operates on a new model of employment—the so-called “on-demand” or “gig” economy of short-term jobs perhaps interspersed with underemployment. The Summit needs to begin, therefore, with everyone on the same page with current data on unemployment, employment and underemployment.

As to history, this whole discussion arises from the confluence of two massive trends: the information technology (IT) revolution and the soaring, excessive costs of college matriculation.

The IT revolution, as we all know, is transforming all areas of life and enterprise at an accelerating pace. It is now in what Steve Case, founder of AOL, calls its “Third Wave,” progressing rapidly from the “Internet of things” to the “Internet of everything.” As this revolution has gained speed and momentum, technological turnover has accelerated and pervaded job markets, so that everyone now has to run and jump to keep up with it. A large part of employers’ difficulty in filling jobs with suitably skilled employees is a side effect that has become the new normal in high-tech businesses. That will not change, and it cannot be blamed on colleges and universities; whether they can realistically be expected to do anything meaningful about keeping up with and advancing it is an open question.

The concurrent soaring of college and university costs—and of the huge loans needed to help cover them—has made parents and students increasingly concerned about affordability, student indebtedness and practicality. This has had commercializing effects on college and university cultures, in which students and their parents increasingly regard themselves as consumers purchasing credentials for continued financial support and jobs.

Simultaneously, grade inflation, reduced study workloads, anxiety over what professors want rather than what students should want for themselves, excessive grade-consciousness, and doubts about whether the investment is worthwhile often boil down to one preoccupying question: whether the investment will lead to a steady job that will enable paying off the loans.

The combination of these two trends is the dangerous situation we have today. If the job market is in constant, rapid and accelerating turnover so that jobs and even careers become short-term investments in ephemeral results by both employers and employees—and if the culture of colleges and universities is commercialized, operating as an investment in job security—how can colleges and universities, as relatively sluggish institutions already behind the curve, possibly now be expected to provide rapid-turnover kinds of training for rapid-turnover jobs?

Even if they succeed in training students for today’s job market, that same training will become obsolete tomorrow, and then what will the investment have been worth? How can New England’s colleges and universities, caught in this crunch, be presumed to have any real or viable “need” or obligation to be “the talent engine” for current or future job markets?

Here it is strategically useful to distinguish clearly between “education” and “training.” “Training” is “knowledge and skills development” and is the focus of this discussion; “education” is “self-development,” which is what our colleges and universities were created to do, as in the Classical tradition of liberal education. Education certainly includes training, but is both broader and deeper, intensely personal and social—focusing on the cultivation of values. Education is more about who, and training more about what, students are and will become in their subsequent lives and careers.

It has long been conventionally accepted that the mission of “higher” education in colleges and universities, as distinct from that in schools, is to bring training in disciplined scholarship to bear on the cultivation of personal values, as in liberal education. This is not something that goes in and out of fashion with changes in economies or technologies. The training function needs to keep in tune with useful knowledge and skills in fast-changing technology and job markets, and the challenge of keeping au courant is real, but it remains always subordinate to the permanent and characteristic mission of higher education.

Here, modern technology itself can help. Training these days is done most productively and efficiently by computers and the Internet, as has been conclusively demonstrated by MOOCs. Obviously the employers who are complaining about the technical preparedness of prospective hires know best what training (knowledge and skills) they want those new hires to have. They happen also, however, to be in the best position to provide it themselves.

Case (incidentally, a graduate of Williams College), in his book The Third Wave, put it succinctly: Let higher education develop character—which, he advises, is what innovating entrepreneurs should be looking for in hiring—and let businesses then train for the special skills they currently and prospectively need. MOOC-style courses could be the instrument of choice for such training; highly flexible and productive, they can be developed quickly by anyone, for any subject and trainee population, at minimal cost, and readily superseded as needs change.

Can colleges and universities help address this employment problem generated by the technological revolution? Yes. They might, at tolerable cost to themselves (perhaps supported by the businesses that, after all, need the workforce), incentivize this training with (limited) credit toward degrees for online MOOC work; they might provide various certifications, apart from degree credit, for MOOC students. They might open room and board facilities to MOOC enrollees, especially in summer or other off-season months, at least partially supported by the businesses needing them. They might provide MOOC trainees with a range of supplementary educational support services delivered by adjunct faculty. Adjuncts might assist with running MOOCs, and businesses might have their MOOC instructors appointed as adjunct members of the faculty, if the cost-sharing could be worked out.

There is a wide variety of facilitating and affiliating options for training, short of undertaking full responsibility. But in this whole context, the suggestion that New England’s colleges and universities should assume, or be expected to assume, responsibility for supplying technically prepared employees to businesses, is an idea that is close to absurd and dead on arrival.

George McCully is a historian, former professor and faculty dean at higher education institutions in the Northeast, then professional philanthropist and founder and CEO of the Catalogue for Philanthropy. This piece first ran on the news website of the New England Board of Higher Education (nebhe.org).

 
