
Augmented Intelligence

The generative artificial intelligence revolution is already happening in the workplace—and it looks nothing like you’d expect.

Since ChatGPT went mainstream this year, many of the news stories about generative artificial intelligence have been full of gloom, if not outright panic. Cautionary tales abound of large language models (LLMs), like ChatGPT, stealing intellectual property or dumbing down creativity, if not putting people out of work entirely. Other news has emphasized the dangers of generative AI—which is capable of responding to queries by generating text, images, and more based on data it’s trained on—such as its propensity to “hallucinate” wrong information or inject bias and toxic content into chats, a potential legal and PR nightmare.

Beyond these legitimate fears, however, many companies are adopting generative AI at a fast clip—and uses inside firms look different from the dire predictions. Companies experimenting with AI have discovered a powerful tool in sales, software development, customer service, and other fields.

On the leading edge of this new frontier, many Berkeley Haas faculty and alumni are discovering how it can augment human intelligence rather than replace human workers, aiming toward increased innovation, creativity, and productivity.

“We’re used to thinking of AI as something that can take repetitive tasks, things humans can do, and just do them a little faster and better,” says Jonathan Heyne, MBA 15, chief operating officer of DeepLearning.AI, an edtech company focused on AI training, who also teaches entrepreneurship at Haas. “But generative AI has the ability to create things that don’t exist—and do it through natural language, so not only software programmers or data scientists can interact with it. That makes it a much more powerful tool.”

More jobs, new jobs

Those capabilities make gen AI ideal for summarizing information, extracting insights from data, and quickly suggesting next steps. A report by a team of researchers from OpenAI, OpenResearch, and the University of Pennsylvania concluded that for 80% of workers, LLMs could affect at least 10% of their tasks, while 20% of workers could see at least 50% of their tasks impacted. Another report, by Goldman Sachs, predicts that two-thirds of jobs could see some degree of AI automation, with gen AI in particular performing a quarter of current work and exposing the equivalent of up to 300 million full-time jobs worldwide to automation. Yet, the report adds, worker displacement from automation “has historically been offset by creation of new jobs, and the emergence of new occupations following technological innovations accounts for the vast majority of long-run employment growth.”

That’s in line with the findings of Assistant Professor Anastassia Fedyk, whose research has found that AI has been leading to increased sales and employment. In a forthcoming paper in the Journal of Financial Economics, Fedyk and colleagues found that firms’ use of AI led to increased growth for companies through more innovation and creation of new products, which increased both sales and hiring.

Fedyk says that industries whose tasks are particularly suited to AI, such as auditing, could see workforce reductions over time. For most fields, however, she predicts that the workforce will stay steady while its composition changes. Her new National Bureau of Economic Research working paper studying employment at companies investing in AI found that those firms seek a workforce that is even more highly skilled, highly educated, and technical than at other firms. “We’re seeing a lot of growth in jobs like product manager—jobs that help to manage the increase in product varieties and increase in sales,” Fedyk says.

An explosion of possibilities

Company conversations about gen AI exploded this spring, says Amit Paka, MBA 11, founder and COO of Fiddler AI, a five-year-old startup that helps firms build trust into AI by monitoring its operation and explaining its black-box decisions. “Generative AI became a board-level conversation,” he says, “even if folks in the market don’t know how they’ll actually implement it.” For now, firms seem more comfortable using gen AI internally rather than in customer-facing roles where it could open them up to liability if something goes wrong.

Obvious applications are creative—for example, using it to generate marketing copy or press releases. But the most common implementations, says Paka, are internal chatbots that help workers access company data, such as human resources policies or industry-specific knowledge bases. More sophisticated implementations are models built on specialized data, like Google’s Med-PaLM, an LLM tuned to answer medical questions, and Bloomberg’s BloombergGPT, trained on 40 years of financial data to answer finance questions. Deciding what type of LLM to implement in a company is a matter of first figuring out the problem you need to solve, Paka says. “You have to find a use case where you have a pain point and where an LLM will give you value.”
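A common pattern for the kind of internal chatbot Paka describes is retrieval-augmented generation: fetch the policy passages most relevant to a question, then hand them to an LLM along with the question so the answer stays grounded in company data. As a rough, hypothetical sketch—with invented policy snippets, an invented question, and no particular vendor’s LLM—the retrieval and prompt-assembly steps might look like this:

```python
# Minimal, hypothetical sketch of the retrieval step behind an internal
# HR-policy chatbot. Snippets and names are invented for illustration; a real
# deployment would index the company's own documents and send the assembled
# prompt to whichever LLM the firm has approved.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

POLICY_SNIPPETS = [
    "Employees accrue 15 days of paid vacation per year, prorated by start date.",
    "Expense reports must be submitted within 30 days of the purchase date.",
    "Remote workers may request a one-time home-office equipment stipend.",
]

def retrieve(question: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k snippets most similar to the question (TF-IDF cosine)."""
    vectorizer = TfidfVectorizer().fit(docs + [question])
    scores = cosine_similarity(
        vectorizer.transform([question]), vectorizer.transform(docs)
    )[0]
    ranked = sorted(zip(scores, docs), reverse=True)
    return [doc for _, doc in ranked[:top_k]]

def build_prompt(question: str, docs: list[str]) -> str:
    """Assemble the grounded prompt that would be sent to an LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(question, docs))
    return (
        "Answer using only the company policies below.\n"
        f"Policies:\n{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    print(build_prompt("How many vacation days do I get?", POLICY_SNIPPETS))
```

Grounding answers in retrieved company text is one way teams try to contain the hallucination and liability risks described above, though it does not eliminate them.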

For now, firms seem more comfortable using gen AI internally rather than in customer-facing roles where it could open them up to liability if something goes wrong.

The power of video

While many companies are already using gen AI to analyze and generate text, video applications are next. Sunny Nguyen, MBA 18, is lead product manager for multimodal AI at TwelveLabs, which recently launched Pegasus, a video-language foundation model that uses gen AI to understand video and turn its content into summaries, highlights, or other customized output. “Video understanding is an extremely complex problem due to the multimodality aspect, and lots of companies still treat videos as a bunch of images and text,” Nguyen says. “Our proprietary multimodal AI is aimed at solving this challenge and powering many applications.” For example, sports leagues could use the technology to generate game highlights for fan engagement; online-learning publishers could generate chapters or highlights instantly; and police officers could get accurate, real-time reports of suspicious activity.

TwelveLabs is launching an interactive chat interface where users can ask questions about a video in an ongoing dialogue. “Just like ChatGPT but for video,” Nguyen says.

Norberto Guimaraes, MBA 09, cofounder and CEO of Talka AI, is focusing video analysis on business-to-business sales conversations, using gen AI to analyze not just verbal content but nonverbal cues as well. Guimaraes says nonverbal factors can account for up to 80% of the impact made in a conversation. Talka’s technology uses AI to analyze 80 different signals—including facial expressions, body language, and tone of voice—to judge whether a conversation is achieving its purpose, usually completing a sale.

Guimaraes says the technology could be used to train salespeople to communicate more effectively and discern clients’ needs. “We’ll be better able to understand what are the key frustrations from your customer, whether you’re taking into account what they’re saying, and whether or not the conversation is landing,” he says.

Computer programmers have begun implementing more formal techniques in a new field called AI fairness, which employs mathematical frameworks based on social sciences to de-bias embedded data.

Talka AI is currently testing the technology with a “very large” company that is “one of the best known for sales,” Guimaraes says. It currently has 70,000 conversations in its system and has been able to successfully predict whether a sale will occur 85% of the time.

Sales and service

Companies are also exploring the use of AI to take part in simple sales. Faculty member Holly Schroth, a distinguished teaching fellow who studies negotiations and influence, has consulted with the company Pactum, which has been working on an AI tool to manage low-level sales—repetitive negotiations that have just a few different issues such as length of contract, quantity, and price. In initial studies, Pactum has found that people prefer talking to AI versus a human. “People like talking with a bot because it’s kinder and friendlier,” says Schroth, “because it can be programmed that way.”

Specifically, AI bots can be programmed to use language that acknowledges what the other side is saying. “Humans sometimes get frustrated and may not be aware of the language they use that may be offensive,” says Schroth. “For example, ‘with all due respect’ is at the top of the rude list.” People may feel like they can get a better deal with AI, she says, since the bot will work to maximize value for both sides, while a human may not be able to calculate best value or may let emotions interfere.

AI is also perfectly positioned to be a coach, says Assistant Professor Park Sinchaisri. He’s explored ways AI can help people work more efficiently, whether they are Uber drivers or physicians. In today’s hybrid environment, where workers are often remote without the benefit of on-the-job training or peer-to-peer learning, a bot can learn best practices from colleagues and identify useful advice to share with others. AI could also help human workers redistribute tasks when a team member leaves. However, Sinchaisri has found that while AI provides good suggestions, humans can struggle to adopt them. In his working paper on AI for human decision-making, workers accepted only 40% of machine-generated suggestions, compared to 80% of advice from other humans, saying they didn’t believe the AI advice was effective or didn’t understand how to incorporate it into their workflow.

Sinchaisri is studying ways to make coaching more effective—either by training the AI to give only as much advice as the person might accept or by allowing for human nature. “Research has shown that humans tend to take more advice if they can modify and deviate from it a little,” he says. “Good advice is often counterintuitive, meaning it is difficult for humans to figure it out on their own; AI needs to learn how to effectively deliver such advice to humans to reap its full potential.”

Bias and ethics

As powerful and versatile as AI can be, the warnings are real. Trained on the vastness of the internet, large language models pick up toxic content and racist and sexist language. Then there’s the real problem of hallucinations, in which AI output seems believable but includes false information.

Biases are baked into LLMs, says Merrick Osborne, a postdoc at Haas studying racial equity in business. In a new paper on bias and AI, Osborne explores how biased information results not only from the data a model is trained on but also from the engineers themselves, with their natural biases, and from the human annotators whom engineers employ to fine-tune and subjectively label data.

“You need to create a culture of accepting that generative AI is useful in many stages of work and encouraging people to be transparent with their co-workers about how they’re using it.”

—David Evan Harris

Certainly more diversity in the field of engineering would help. But it’s important, Osborne argues, that engineers and annotators undergo diversity training to make them more aware of their own biases, which in turn could help them train models that are more sensitive to equal representation among groups. Computer programmers have begun implementing more formal techniques in a new field called AI fairness, which employs mathematical frameworks based on social sciences to de-bias embedded data. “We aren’t born knowing how to create a fair machine-learning model,” Osborne says. “It’s knowledge we have to acquire.”
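One of the simplest checks in that AI-fairness toolkit is the demographic parity gap—the difference in a model’s positive-prediction rates across groups. The sketch below is a rough illustration only, with invented data; the article does not say which metrics any particular team applies.

```python
# Minimal, hypothetical illustration of one common AI-fairness check:
# the demographic parity gap, i.e. the difference in positive-prediction
# rates across groups. Data below are invented for illustration only.
from collections import defaultdict

# (group, model_prediction) pairs; 1 = model recommends approval.
predictions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def positive_rates(preds):
    """Share of positive predictions for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, label in preds:
        totals[group] += 1
        positives[group] += label
    return {g: positives[g] / totals[g] for g in totals}

rates = positive_rates(predictions)
gap = max(rates.values()) - min(rates.values())
print(rates)                                  # {'group_a': 0.75, 'group_b': 0.25}
print(f"demographic parity gap: {gap:.2f}")   # 0.50 on this toy data
```

Real audits combine several such metrics and weigh them against accuracy, but the underlying idea is the same: de-biasing rests on quantities that can be measured and monitored, which is why Osborne calls it knowledge that has to be acquired.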

Another way Osborne suggests addressing both bias and hallucinations is to call in outside help. Vijay Karunamurthy, MBA 11, is doing just that as field CTO at Scale AI, a seven-year-old startup that’s worked to make models safer and fairer. “People understand that models come out of the box without any sensitivity or human values, so these base models are pretty dangerous,” he says. Scale AI employs teams of outside experts, including cognitive psychologists with backgrounds in health and safety, who can help decide what information would be too dangerous to include in an LLM—everything from teaching how to build a bomb to telling a minor how to illegally buy alcohol. The company also employs social psychologists, who can spot bias, and subject experts, such as PhDs in history and philosophy, to help correct hallucinations.

Of course, it’s not feasible to have hundreds of PhDs constantly correcting models, so the company uses the information to create what’s called a critique model, which can train the original model and make the whole system self-correcting.

For companies adopting AI, it’s important to develop internal processes to help guide ethical use by employees. One of those guidelines, says faculty member David Evan Harris, a chancellor’s public scholar, is disclosure. “People have a right to know when they’re seeing or interacting with generative AI content,” says Harris, who was formerly on the civic integrity, misinformation, and responsible AI teams at Meta. That goes for both internal use and external use with customers. “When you receive content from a human you probably have more reason to trust it than when it’s coming from AI because of the propensity of the current generation of AI to hallucinate.” That’s especially true, he says, when dealing with sensitive data, like financial or medical information.

Companies may also want to control how gen AI is used internally. For example, Harris says there have been numerous cases in Silicon Valley of managers using it to write peer reviews for regular performance evaluations. While a tempting shortcut, it could result in boilerplate verbiage or, worse, wrong information. Harris says it’s better to come up with new strictures for writing reviews, such as using bullet points. On the other hand, banning AI is unlikely to work. “You need to create a culture of accepting that generative AI is useful in many stages of work and encouraging people to be transparent with their co-workers about how they’re using it,” he says.

One practice to avoid when crafting internal policies around gen AI is to limit governance programs to the letter of the law—since it tends to lag behind ethics, says Ruby Zefo, BS 85, chief privacy officer at Uber. “The law should be the low bar—because you want to do what’s right,” says Zefo. “You have to create policies and programs and documentation that will put you on the right side of the laws you know are coming but aren’t yet here.”

For one, that means developing guidelines around personal or confidential data—being careful not to use others’ personal or proprietary information to train a model and not to feed such information into a model that is or might become public. When running algorithms on personal data for customers, she adds, it’s important to allow for human review. Companies should also limit access to internal gen AI models to those with a legitimate purpose. More than anything, Zefo says, flexibility is key while the technology is still being developed. “You have to have a process where you’re always evaluating your guidelines, always looking to define what’s the highest risk.”

Planning for the future

That need to stay nimble extends to the workforce as well, says Heyne. In the past, AI was mostly used by technical workers programming models—but gen AI will be used by myriad employees, including creative, sales, and customer-service workers. As gen AI develops, their day-to-day work will likely change. For example, a sales agent interacting with a bot one day may be overseeing a bot negotiating with an AI counterpart the next. In other words, sales or procurement functions in an organization will remain but will look different. “We have to constantly think about the tasks we need to train for now to get the value that is the goal at the end,” Heyne says. “It’s a strategic imperative for any company that wants to stay in business.”

“The law should be the low bar—because you want to do what’s right. You have to create policies and programs and documentation that will put you on the right side of the laws you know are coming but aren’t yet here.”

—Ruby Zefo, BS 85

It’s also an education that needs to start much earlier in life, says Dimple Malkani, BS 98. She founded Glow Up Tech to prepare teenage girls for the tech industry by introducing them to successful female leaders in Silicon Valley. The skills necessary to succeed in a gen AI world aren’t necessarily those emphasized previously in tech, or even in previous iterations of AI, says Malkani, who spent decades working in marketing and business development. “The core skills these girls should be getting when they go to college aren’t just data science but strategy and creativity as well—to figure out what new product innovation we should create,” she says.

One thing she’s sure of as she talks with the next generation about gen AI is that, unlike current workers, they are ready to dive in. “Gen Z is very comfortable using gen AI,” she says. “In fact, they’ve already embraced it and expect it to be part of their working futures.”

Nick Sonnenberg, MFE 07
CEO and Founder, Leverage

If you’ve ever felt overwhelmed with work, you’re not alone. Nick Sonnenberg heard the complaint so often that he wrote a book to solve the problem: Come Up for Air: How Teams Can Leverage Systems and Tools to Stop Drowning in Work.

It provides a framework for eliminating unnecessary tasks and focusing instead on work that drives results. Along with his operational efficiency platform, Leverage, Sonnenberg is reinventing the way people get things done.

Before he became an efficiency expert, Sonnenberg was barely staying afloat himself. He’d originally started a freelancer marketplace called Leverage that scaled very quickly. Then, his business partner walked out, jeopardizing the company’s future. Sonnenberg soldiered on, quickly noticing how much inefficiency there was, specifically in three areas: communication, planning, and resources.

“To have any chance of saving the company, I needed to get some time back,” he says. “Focusing on those buckets, things started turning around.”

Soon, people began contacting him for organizational advice. Eventually, he pivoted the company to become an efficiency training firm.

Sonnenberg says his success with Leverage wasn’t a case of getting lucky when his back was against the wall. He credits his MFE training and his years as a high-frequency trader, where he learned every second matters.

“Being a financial engineer, I’m programmed to find pattern recognition,” Sonnenberg says. “I started connecting the dots that there was this big opportunity to help a lot of people hopefully save millions of hours by teaching best practices of how to leverage all these amazing systems and tools, like Slack and Asana.”

comeupforair.com

Keeping Company

Know your gig workers to retain them

When done right, the gig economy can mutually benefit companies and workers. Companies can tap into deep and vast labor pools, and workers can create their own schedules. But such flexibility challenges gig platforms in committing to a service capacity. What incentives, then, can entice workers to work more hours more often?

A recent study co-authored by Assistant Professor Park Sinchaisri and published in Manufacturing & Service Operations Management sought to answer that question.

The researchers utilized data from a U.S.-based ride-hailing company that included 358 days of driving activities and financial incentives for thousands of New York City drivers between 2016 and 2017. Perhaps not surprisingly, they found that drivers work toward their income goals and are less likely to work after meeting them.

More surprisingly, Sinchaisri found that workers who have previously worked longer shifts are more likely to start a new shift or work longer than drivers who have worked less. This finding goes against previous research on taxi drivers, who have more of a “time-targeting behavior.”

Sinchaisri says that gig platforms should ask what specific goals workers have and make targeted adjustments. “Once you know your workers’ goals, you can think of better ways to incentivize them,” he says.

There’s No Place Like Work

How place identity enhances engagement

Post-pandemic workspaces have become increasingly fluid, and companies are trying out hot desks and hoteling spaces as they struggle to entice workers back to the office. But new research suggests that leaders wanting to build employee engagement should think less about rearranging the furniture and more about how employees relate the office space to their own work.

“When people feel a sense of self-esteem and distinctiveness derived from their workspace, we found it enhances their engagement,” says professional faculty member Brandi Pearce. “It also increases collaboration and their commitment to the organization.”

Pearce and colleagues from Stanford and Pepperdine universities studied “place identity,” as they refer to this sense of connection, at a software company transitioning workers at sites worldwide from traditional offices to open-plan innovation centers.

The research, published in Organizational Dynamics, found that whether people accepted or rejected the innovation centers didn’t align with their work functions or professional backgrounds, nor with age, gender, location, or other factors. “What seemed to matter more than the space itself was how people felt the space connected to them personally, positively differentiated them, and reflected a sense of belonging to something meaningful to them,” Pearce says.

“When people feel a sense of self-esteem and distinctiveness derived from their workspace…it enhances their engagement.”

What’s more, workers with a distinctive sense of place identity collaborated more actively with one another and were more engaged and committed to the organization.

So how can leaders cultivate place identity? Whether the setting is physical, hybrid, or virtual, Pearce suggests three best practices:

Broadcast the vision.

No matter the setup, leaders should clearly communicate the purpose of the space and what kinds of work are best done in the various workplaces: brainstorming sessions, workshops, and other collaborative tasks in work offices, for example, and focused time in home offices. To help define virtual workspaces, leaders can state whether video conferences are meant for efficiency or connection.

Model enthusiasm.

Equally critical to visioning is the way leaders convey a positive attitude about space. In a hybrid setting, leaders can express enthusiasm by holding in-person meetings on in-office days and visibly blocking calendar time during remote-work days for solitary work.

Empower employees.

The researchers found place identity was highest when employees were encouraged to tailor their spaces to suit their needs and preferences. In one location, for example, employees were given resources to co-create furniture and other artifacts, enhancing their personal connection to the office. Remote workers could be given materials to customize their home spaces to create a connection to their team or organization, or—if they do visit the office—to create something with co-workers to bring home.

Assoc. Professor Andrew Shogan

Operations research expert

Associate Professor Andrew W. Shogan, 74, an expert in operations research, died on May 30 in Orinda, Calif. Shogan joined Berkeley Haas in 1974 and spent his entire professional career at the school until his retirement in 2007. He was beloved by faculty and staff alike. His passion for teaching and advancing the use of mathematical models to formulate and solve problems arising in business, industry, and government earned him several teaching awards, including the Earl F. Cheit Award for Excellence in Teaching—the highest honor given to faculty by students.

In addition to teaching, Shogan served as associate dean for instruction for 16 years, from 1991 to 2007, during which he oversaw the growth of all six degree programs and introduced innovations to the MBA program, including the creation of a shared virtual classroom with MBA students from Haas, Darden, and Michigan Ross. In recognition of his contributions to Haas, Shogan earned the Chancellor’s Distinguished Service Award in 2007.

Donations in his memory may be made to the Haas School of Business Undergraduate Program. Visit our giving site and note “in honor of Andrew Shogan.” Read his full obituary.

IN MEMORIAM

Herbert Ems, BS 47
Terry Haws, BS 49
Robert Hake, BS 50
Robert Elder, BS 51
Frank Corona, BS 53
Edward De Matei, BS 56
Beryl Robinson, BS 57
Richard Emerson, BS 59
Will Gassett, BS 60
Robert Buchman, BS 61
Robert Hermanson, BS 63, MBA 65
Barbara Tosse, BS 65
John McCue, MBA 69
Robert Hickey, MBA 77
J.P. Sheehan, MBA 78
Julie Leuvrey, MBA 88
Stockton Rush III, MBA 89
Leonora A. Burke, PhD 90
Susan Kobayashi, MBA 92
Candace Bennyi, BS 94
Beidi Zheng, MBA 08
Daniel Potter, MBA 19
Victor Garlin, Faculty
June A. Cheit, Friend

Study: In the gig economy, getting to know workers is key to keeping them


When done right, the gig economy can mutually benefit companies and workers. Companies can tap into deep and vast labor pools. And workers can create their own schedules, enjoying flexibility and working as much—or as little—as they want. But that can also create some headaches. What if workers collectively only want to work during specific times? What incentives can get them to work more hours more often?

A recent study coauthored by Berkeley Haas Assistant Professor Park Sinchaisri set out to answer these questions. The study, published in the journal Manufacturing & Service Operations Management, concludes that financial incentives increase both how often and how long people work. The study also finds that workers will work less after reaching a daily or weekly financial goal, but those who start work earlier in the day are more likely to work beyond the time required to hit that goal.

Robust data set includes nearly a year of ride-hailing data

Sinchaisri and his fellow researchers—Gad Allon of the University of Pennsylvania’s Wharton School and Maxime Cohen of McGill University—utilized a comprehensive data set from a U.S.-based on-demand ride-hailing company. The data included 358 days’ worth of driving activities and financial incentives for drivers in New York City between October 2016 and September 2017.

The data included thousands of drivers and millions of work shifts, as well as each driver’s vehicle type, experience with the platform, number of hours driven, and financial incentives offered and earned. “The key advantage of our data is that we observe the incentives that were offered to every driver regardless of the decision to drive. In other words, even for drivers who decided not to drive for a particular time period, we still know their offered wage and promotions for that period,” the authors wrote.

Drivers demonstrate an ‘inertia’ when it comes to working hours

Perhaps not surprisingly, the researchers found that drivers exhibit an “income-targeting behavior,” which is easy for them to follow since most apps give real-time earnings reports. That is, drivers will work toward their income goals but are less likely to work once they meet them.

More surprisingly, Sinchaisri and his team found an ‘inertia’ regarding working hours. Workers who have previously worked longer shifts are more likely to start a new shift or work longer than drivers who have worked less. The finding goes against previous research on taxi drivers, who have more of a “time-targeting behavior.”

“This difference could be driven by the unique flexibility of gig work,” the paper says, suggesting that inertia could represent drivers’ strategic behavior.

“Inertia is why it is so hard to bring gig workers back after the pandemic—they currently aren’t used to working day after day, so it’s a matter of attempting a cold start,” Sinchaisri said. “However, there is a flip side. Once these workers go back to working, they are much more likely to continue working.”

Get to know your gig economy workers

Sinchaisri says getting to know workers better can help companies create more targeted incentives. Companies should ask gig economy workers what specific goals they have and make adjustments based on that feedback, he advises. “Once you know your workers’ goals, you can think of better ways to incentivize them,” Sinchaisri points out.

Or, as the paper states, “Targeting specific workers with different incentives can be beneficial. We examine how the platform can improve its operational performance by offering personalized incentives based on workers’ attributes.”

Incentives are good, but not everything

Sinchaisri says gig companies should pay their workers as much as possible, but good pay and incentives alone are not everything. In today’s climate, where gig workers have plenty of options, companies should do what they can to continually attract and retain them.

“The recent rise of the gig economy has changed the way people think about employment,” the paper states. “Unlike traditional employees who work under a fixed schedule, gig economy workers are free to choose their own schedule and platform to provide service. Such flexibility poses a great challenge to gig platforms in terms of planning and committing to a service capacity. It also poses a challenge to policymakers who are concerned about protecting workers.”

As this research suggests, that also means considering behavioral factors.

“We find that financial incentives have a positive effect on the decision to work and on the work duration, confirming the positive income elasticity from the standard income effect,” the paper concludes. “We also observe the influence of behavioral factors through the accumulated earnings and number of hours previously worked. The dominating effect, inertia, suggests that the longer workers have been working so far, the more likely they will continue working and the longer duration they will work for.”

Sinchaisri noted that these insights are not specific to the particular platform they studied and could apply to other types of gig-work platforms. “Many gig platforms do offer more certainty in pay,” he said. “This could be one way to improve the retention and the motivation of the workers.”

Read the full paper:

The Impact of Behavioral and Economic Drivers on Gig Economy Workers
By Gad Allon, Maxime C. Cohen, and Wichinpong Park Sinchaisri
Manufacturing & Service Operations Management
March 2023

Work in Progress

Haas experts on what to expect in the ever-evolving arena of work.

Change has long been coming for the world of work.

Automation and artificial intelligence technologies have been on the horizon or among us in their rudimentary forms for years—we’ve grown used to customer service conversations with chatbots, for example. Online hiring platforms (such as Upwork for freelance gig workers) have been complementing more traditional approaches to hiring for roughly a decade. And even pre-pandemic, the proportion of remote-capable U.S. workers in fully remote arrangements was inching up slowly, by 2019 climbing to 8%, according to Gallup.

COVID-19 kicked these slowly evolving trends into a turbo-charged rate of dizzying change. By mid-2022, nearly a third of remote-capable U.S. workers were fully remote. A survey by Upwork found that 53% of businesses said the pandemic increased their willingness to hire freelance gig workers. And the pandemic-induced imperatives to social distance and to adapt to fluctuations in demand spurred new investment in and utilization of automation technologies. 

Several Haas thought leaders are focusing their research on the questions many of us are asking ourselves as we reel from the rapid changes imposed on our work lives and work identities. For whom is the shift to remote work a net-positive change, and for whom is it a detriment? In which situations might these newly pervasive work arrangements be narrowing inequalities among workers—and where are they creating new ones? What does the newest research suggest about the likelihood that cutting-edge AI tools will render obsolete whole sectors of workers? And, perhaps most importantly, how do we define “good jobs”—and how can we, as a society, ensure that they don’t go extinct? 

The Haas thought leaders featured here don’t have all the answers, but they do have research-backed predictions, policy recommendations, and reasons for both concern and optimism as we chart our way through the end of work as we knew it—and orient ourselves in the world of work that’s emerging in its place. 

The remote future

Assistant Professor David Holtz signed the papers that made official his doctoral research internship at Microsoft in March 2020—timing that would prove portentous. He’d been invited into the technology firm’s Redmond, Washington, offices to study online marketplaces. Soon, however, Holtz found himself working not from Microsoft’s campus but remotely from his East Coast apartment—and on an altogether different research question: How was the swift decampment to remote work affecting communication and collaboration within Microsoft?

Before the pandemic descended, 18% of Microsoft’s U.S. employees worked remotely. By April 2020, the firm had instituted a mandatory work-from-home policy for all of its non-essential U.S. employees.

To investigate how remote work reshaped communication practices among Microsoft’s more than 60,000 U.S.-based employees, Holtz and his co-authors analyzed anonymized data summarizing individual workers’ time spent in meetings and on calls, the number of emails and instant messages they sent, the length of their workweeks, and the patterns of their collaboration networks. 

Their data covered December 2019 to June 2020—so, the several months before and after the firm-wide work-from-home policy took effect. Access to this before-and-after data was important, Holtz emphasizes, because it allowed the research team to compare the working patterns of those 18% of employees who’d been remote pre-pandemic to the patterns of those who shifted to remote work because of COVID-19.

“We took really seriously the matter of trying to separate the effects of remote work from the effects of the pandemic,” Holtz explains. Already in 2020, many were speculating that a wholesale return to offices might never happen. “We wanted to understand, if that were the case, what would the effects of remote work be once the pandemic had subsided?”

Overall, the picture of remote work that emerged in their findings was not one of an arrangement particularly conducive to innovation. One of their main findings was that working remotely was associated with a decrease in the number of (and amount of interactions with) a person’s “weak ties”—that is, those colleagues with whom you don’t work directly but with whom a casual interaction can prove helpful or illuminating in surprising ways.

“There’s all this research that shows weak ties to be really important for the diffusion of new ideas and the propagation of information through an organization,” Holtz says. 

Relatedly, they found that the rate of change within employees’ networks fell considerably when working remotely. “The network kind of ossifies and starts to freeze in place,” Holtz says. “Research shows that creativity is associated with fresh teams, working with new folks.” 

Inequalities, old and new

Research from Associate Professor Aruna Ranganathan adds a more positive dimension to this picture of remote work’s effects—especially when it comes to creativity. For some individuals within an organization, her research suggests, the adoption of a remote setup may actually act as a booster shot for creativity and performance.

Ranganathan has always been a scholar of work and employment, with a particular focus on individual-worker outcomes. “I’m interested in understanding how remote work perhaps exacerbates some preexisting inequalities, creates new forms of inequality, and also has the potential to mitigate some inequalities that existed in more traditional forms of work,” she says. She points to research indicating that women have long been held back from performing to their full potential at work, given that they experience more interruptions in team discussions and generally face lower performance expectations. Of course, this previous research has presumed a traditional synchronous team environment (imagine employees chatting in real time around a conference table). But as many of the world’s workers moved remote during the pandemic, asynchronous collaboration (via email, say) shot up.

For some individuals within an organization … the adoption of a remote setup may actually act as a booster shot for creativity and performance.

In one project, Ranganathan and her co-author studied folk music ensembles (consisting of a singer and a few instrumentalists) in eastern India performing traditional songs, each having many versions and interpretations largely determined by the singers. In these groups, gender roles are prescribed. When women are members, they typically only sing. Because each ensemble member has a distinct role, songs can be recorded either live as a group or solo and later combined digitally.

Ranganathan and her co-author found that working alone afforded women singers greater freedom of creative expression than when working within a group of men in a more traditional synchronous environment. But tempting as it may be to conclude that remote work is the magic salve in addressing the problem of unequal treatment in the workplace, Ranganathan cautions against it. 

“If we just embrace remote work, we’re not solving the root problem, which is that when these teams come together, certain members are not making other members feel included,” she says. “Embracing remote work shouldn’t mean we don’t try to continue also reshaping the synchronous work environment to be more inclusive of women and other minorities in the workplace.”

Protecting workers

Distinguished Professor Laura Tyson, who has written extensively about the future of work, technology, and trade policy (and who formerly served as an advisor to President Clinton), looks further ahead as she considers the disrupting force of technology on workers. In particular, her books, columns, and papers have lately focused on automation and AI and whether “good jobs” will be able to proliferate as these technologies—which perform tasks more cheaply, faster, and often better than humans—assert greater influence. 

Her prognosis is not hopeful for those blue-collar jobs one might reasonably consider “good”—that is, those that offer middle-class incomes, safe conditions, legal protections, career advancement, and benefits. She’s written that she fears any economic growth spurred by advancing AI technologies will not be widely shared and will further fuel economic inequality. 

“AI will automate many tasks, change existing tasks, and create new ones. There will be both winners and losers in this process,” she wrote in her recent article, “Automation, AI & Work.” 

Firms investing more in AI were also the ones with higher employment rates. … When firms adopt AI technologies, it actually creates the need for new types of human expertise.

Tyson says diverse communities across the U.S. will need to devise “tailored strategies” to meet the changes wrought by technologies still on the horizon. For instance, she points to the need for more affordable housing in major cities and better digital infrastructure to support remote work in rural areas. “All communities can expect to face challenges relating to workforce redeployment and mobility, skills and training, economic development and job creation, and support for those undergoing occupational transitions triggered by automation.”


Research from Assistant Professor Anastassia Fedyk offers some optimism on how AI could enhance employment in some areas. In one study, Fedyk found that the firms investing more in AI were also the ones with higher employment rates. She says this suggests that when firms adopt AI technologies, it actually creates the need for new types of human expertise.

“What the data show is that the main effect of AI in most industries is not replacing human labor,” she says. “Instead, AI is allowing firms to innovate and grow.” 

Where her findings align with Tyson’s concern is on the topic of inequality. Fedyk’s other research has found that firms investing more in AI go on to hire more-educated workforces. “It seems that firm investments in AI are conducive to greater demand for college-educated and technical workers,” she says. “These findings suggest that it’s important to invest in upskilling the workforce as firms adopt new technologies such as AI.” 

Investing in education

Haas Dean Ann Harrison places similar emphasis on the importance of schooling and worker training in countering inequality as the nature of work transforms. One of the leading scholars in trade and development economics, Harrison points to educational programs as one of the most important public provisions in countering inequality. 

“A school like UC Berkeley, and all of California’s public universities and community colleges, play a key role in leveling the playing field,” Harrison says. “But we could do even more. What scares me most is that significantly less than half of our young people get a four-year college degree—we need to change that by increasing public educational opportunities and scholarships.” 

It’s not too late for policies written by humans, for humans, to help determine the ways in which rapidly advancing technologies will shape workers’ lives.

Harrison also says increased social protections need to be considered, along with incentives for firms and innovators to grow in ways that employ more of America’s labor force. “In other words,” she says, “we need to encourage labor-using innovation and entrepreneurship.”

Indeed, worker training is one of the policy interventions Tyson emphasizes, too. She also recommends tax policy reforms to lower payroll and other payroll-related taxes, increasing social benefits and protections for gig workers, and introducing measures to enshrine workers’ ability to collectively bargain and unionize. 

The upshot? Harrison and Tyson agree that it’s not too late for policies written by humans, for humans, to help determine the ways in which rapidly advancing technologies will shape workers’ lives.

“How the benefits of automation are shared among workers from a diverse array of backgrounds is not technologically predetermined,” Tyson has written. “It is entirely up to us.” 

In fact, it’s even possible that AI tools can help clarify some of the most human parts of our work lives—such as allowing for a deeper understanding of how people are experiencing organizational life. Haas professors Jennifer Chatman and Sameer Srivastava developed a machine-learning method to integrate data from an employee survey with eight years of emails from a mid-size technology firm, giving a longitudinal look at “culture fit”—and how it’s formed and maintained within workplaces. 

The results gave them valuable insight into what draws people to certain workplaces and sours them on others.  “We found that two types of culture fit really mattered,” Chatman says. One type was “value congruence”—that is, the alignment between an employee’s personal values and the dominant ones within their organization—and the other was “perceptual congruence”—the extent to which an employee accurately perceives a workplace’s culture and behaves in accordance with it. These two distinct types of culture fit impacted different outcomes at work: Value congruence predicted how long people stayed with the organization, and perceptual congruence was closely tied to workers’ performance success.  

As the workplace and the nature of work itself continue to transform in ways both predictable and less so, understanding workers’ motivations and the drivers of their performance will only grow more important. After all, for the time being at least, humans are still the most complicated machines keeping the world of work running.

Dean’s Speaker Series: Reddit COO Jen Wong on her leadership journey

Growing up as a shy introvert, Reddit COO Jen Wong said she never saw herself as a leader.

“I think I assumed a leader was a person who told other people what to do,” Wong said.

It was her fascination with companies and the people who lead them, as well as a drive to solve new problems, that led her to pursue a career that has included leadership positions at Time Inc., PopSugar, AOL, and now Reddit.

“I’m a puzzler at heart, and when my mind starts searching for a new problem to solve, and there’s something I can learn, that propels me forward,” Wong said. “I always want to move into something that has a clear lane for me to have an impact.”

Wong, who topped Fast Company’s Queer 50 list this year, shared her leadership journey with MBA students and the Haas community at a Dean’s Speaker Series talk on Sept. 21. The talk was co-sponsored by Q@Haas as part of Coming Out Week, September 18-22.

As Reddit’s chief operating officer, Wong oversees business strategy and related teams. Only four years into her tenure as COO, she has helped lead the growth of Reddit into a profitable business by scaling ad revenue to well over $100 million. Her leadership goes beyond growing the business; she is also passionate about a company goal that’s just as important as revenue: diversity and inclusion. Wong is also viewed as an expert in the digital landscape.

Watch the full talk:

