
Generative Artificial Intelligence – Opportunities and Challenges

Last April, Google executive James Manyika patiently explained to 60 Minutes host Scott Pelley that the company’s “Bard” generative AI platform had recently read “most everything on the Internet” and built for itself a language model with almost magical predictive capabilities, one that appears to possess the sum of human knowledge.

Bard and other large language models (LLMs), such as ChatGPT, respond to plain-language questions or prompts by generating “the next most probable word” over and over, one word at a time, producing complex, grammatically correct text in multiple languages at warp speed.
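To make that loop concrete, here is a toy sketch in Python (ours, not any vendor’s model): a bigram counter built from a tiny hand-written corpus that repeatedly appends the single most probable next word. Real LLMs replace the counting with neural networks trained on vast datasets, but the generation loop has the same shape.

```python
# Toy illustration of "generate the next most probable word, one word at
# a time." A bigram model counts which word most often follows each word
# in a tiny corpus, then generates greedily. (Illustrative only; real LLMs
# use neural networks over vastly larger datasets.)
from collections import Counter, defaultdict

corpus = (
    "the model reads text and the model predicts the next word "
    "and the next word follows the last word"
).split()

# "Training": count word -> following-word frequencies.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(seed: str, length: int = 8) -> str:
    """Repeatedly append the single most probable next word."""
    words = [seed]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # no observed follower; stop generating
        words.append(candidates.most_common(1)[0][0])  # greedy pick
    return " ".join(words)

print(generate("the"))  # e.g. "the model reads text and the model reads text"
```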

While these LLMs were not expressly trained to write computer code, enough code exists on the internet that they learned the skill anyway. Now a proficient user can input a set of business requirements, specify a desired programming language, and sit back as highly serviceable code emerges in a matter of seconds, as sketched below.
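As a hypothetical illustration of that workflow (the prompt wording and the generate_code helper below are stand-ins, not any particular product’s API), the entire “program” the user writes is just plain language:

```python
# Hypothetical sketch of prompting an LLM for code. `generate_code` stands
# in for whichever model or API an organization actually uses.
def generate_code(prompt: str) -> str:
    raise NotImplementedError("send `prompt` to your LLM of choice")

prompt = (
    "Business requirements: read a CSV of daily sales, total revenue "
    "by region, and write the summary to a new CSV file.\n"
    "Desired language: Python."
)
# generated = generate_code(prompt)  # seconds later: serviceable code
```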

LLMs are just one form of generative AI (GAI) platform; others, trained on sensory data, offer astonishing new capabilities to create realistic images, videos, pattern-mapped voice translations, and much, much more.

During the past 10 months, we have watched commercial and industrial applications of GAI emerge, with an eye toward understanding the skills necessary for success in the workforce of the future – where up to 70% of all jobs, in every industry, are expected to depend on GAI. It has been an exhilarating, humbling, and profound experience characterized by two conflicting and simultaneous reactions to just about every use case we encountered: “This is utterly unfathomable and entirely inevitable.”

The commercial and societal upside is simply too high, too ubiquitous, and too certain to put the genie back in the bottle. Instead, we must understand the risks and mitigate them as best we can through a combination of industrial restraint, internationally consistent and effective regulation, and political will.

Those of us at the CSUN David Nazarian College of Business and Economics must also meet our responsibility head-on to prepare our students for the workforce of the future, a future certain to be dominated by GAI.

DIGGING A LITTLE DEEPER

GAI can be thought of as a large brain trained on massive amounts of data, including books, articles, and websites. During training, it learned the structure of language, its nuances and grammar, as well as facts and reasoning patterns. When presented with a question, it performs inference, generating content – whether text, images, or video – by leveraging the patterns it has learned. GAI also remembers what was said earlier, and when responding can incorporate previous comments or context. This “multi-step” thinking allows GAI to solve complex problems and achieve remarkable feats. It represents a dramatic advance over previous forms of AI, which relied on smaller datasets, used rule-based programming, had shallow memories, and focused on classifying existing content rather than engaging in the creative process.
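The “remembers what was said earlier” behavior is commonly implemented by re-sending the running conversation with every request, so the model can condition on prior turns. The Python sketch below illustrates that pattern; ask_model is a hypothetical stand-in for a real model call, not any vendor’s actual API.

```python
# Minimal sketch of conversational "memory": the whole history is passed
# back to the model on every turn. `ask_model` is a hypothetical stand-in.
from typing import Dict, List

def ask_model(messages: List[Dict[str, str]]) -> str:
    # Placeholder: a real system would send `messages` to an LLM here.
    return f"(reply conditioned on {len(messages)} prior turns)"

history: List[Dict[str, str]] = []

def chat(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = ask_model(history)  # the entire history goes along
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("What is a smart grid?"))
print(chat("How would GAI optimize one?"))  # can draw on the earlier turn
```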

The implications for industry and society could be dramatic. GAI-powered doctors could diagnose diseases, recommend treatments with incredible accuracy, and operate remotely, giving everyone access to high-quality healthcare. Instructors armed with GAI-powered tools and 24/7 GAI tutors could deliver personalized learning to every student at scale, helping each realize his or her full and individualized potential. Fully autonomous planes, ships, and spacecraft could carry people safely and efficiently. GAI could manage decentralized financial systems holistically, ensuring economic stability and growth. GAI-driven smart grids could optimize energy usage among interconnected sectors, integrate renewable resources, and ensure constant power. GAI-powered robots could clean up oceans, forests, and cities, targeting and removing waste efficiently.

SOCIETAL IMPLICATIONS

The potential impact on society could be enormous, but risks abound:

• Misinformation and disinformation through highly polished, believable deepfakes, counterfeit documents, and even “hallucinations,” in which a model confidently produces facts and sources that never existed.

• Privacy, identity theft, and impersonation are all threats we have faced before, but never with the capacity to project images, voice, and body language so realistically that they can trick even close family members.

• Threats to domestic safety and national security in the form of machines that can train themselves to learn and execute autonomous actions within complex systems such as utilities, transportation infrastructure, and defense.

• Ethical concerns in various and sundry forms, including data biases, systemic discrimination, abuses in fields such as criminal justice or healthcare delivery, and other prejudicial threats across wide swaths of society and life.

REGULATION – NECESSARY BUT CHALLENGING: HOW HARD WILL IT BE?

The good news is that top industry players are themselves very worried about all of these risks. Their individual and collective pleas to be regulated are loud, persistent, and unprecedented at this early stage of an industrial transformation.

The bad news is that regulation always lags technology, and GAI’s astonishing adoption curve has given it a huge head start. Moreover, with a technology that is itself “intelligently adaptable,” it may prove easier to regulate forensically than preemptively. That is a bad prescription for a technology where Elon Musk has projected the probability of a doomsday event as “not zero.”

There is an enormous amount of work to do:

• Define risks, mitigations, and standards in both international organizations and sovereign state jurisdictions, with varying and often conflicting political, economic, and values-based motivations.

• Balance innovation and safety, with processes and testing protocols, in an industry where the underlying technology is prone to creating its own “emergent behaviors.”

• Establish a regulatory infrastructure in which transparency, interpretability, judgments, and the adjudicatory and enforcement processes are all stressed by the inherent intellectual inaccessibility of neural networks, machine learning, and extreme mathematics.

• Begin the long and messy process of encoding these newly considered ethical concerns into statutes while navigating a litigation environment short on case law for GAI disruptions in numerous areas.

THE DAVID NAZARIAN COLLEGE OF BUSINESS AND ECONOMICS AT CSUN

Five years ago, the Nazarian College settled on an approach to address the constant, rapid-cycle emergence of technologies with disruptive potential – “Data First.” This decision reflected a belief that we would never be able to predict the technological winners and losers, but we could build curriculum and co-curricular programming around the source code for all these cognitive technologies – Data.

Since then, the national rise of legitimate employer concerns regarding skill gaps among fresh college graduates has prompted us to articulate a new approach: “Professional Education Beyond a Degree.” Within it, our students earned more than 1,800 skills-based certifications during the 2022-23 academic year, either as course assignments or in conjunction with varied and robust professional development programming.

Next month we will launch a series of nine skills-based boot camps in an initiative funded by the JPMorgan Chase Foundation and centered around industry-created micro-credentialing certificates distributed on the Coursera Academy platform.

Our signal to our students and employers alike: Our graduates will be ready and valuable, with current skills, on Day 1 for that first job, but we will not compromise on our commitment and responsibility to prepare them for a long-term career, indeed for life. Beyond the skills, our graduates will possess a solid grounding in a traditional discipline, within a broad curriculum centered on learning outcomes that include critical thinking, effective communication, accomplished teamwork, substantive experiential learning, and diversity and inclusion excellence.

In so doing, we intend to impart the three most important skills students will need in the workforce of the future: 1) confidence, competence, and intentionality in their own use of generative AI – and of all cognitive technologies; 2) a refined capacity and deep commitment to lifelong learning; and 3) the ability to discern the truth in the brave new world in which they will thrive.

Deone Zell, Ph.D., is a professor of management at the CSUN David Nazarian College of Business and Economics. Bob Sheridan serves as the executive director of its Center for Career Education and Professional Development.
