
Generative AI - Boards Charting the Course for Responsible Adoption



Generative AI represents one of the most transformative technologies to emerge in the digital age. From image generators like DALL-E that render visuals from text prompts to language models like ChatGPT that draft prose on demand, businesses now have powerful automation at their disposal. Deploying these tools in ways that are both impactful and responsible, however, is genuinely complex. Drawing on McKinsey’s recommendations, boards should center their oversight of generative AI strategy on four pillars: potential applications, market impacts, risk mitigation, and capability building.


Expanding the Realm of Possibilities


The first question boards should press leadership teams on is where generative AI could be applied across all functional areas. Marketing applications are fairly intuitive: automating visual media creation and generating customized content at scale are two examples. McKinsey stresses, however, using creative brainstorming techniques across other departments to uncover less obvious uses. How could AI optimize supply chain flows, unlock insights buried in internal datasets, or screen potential pharmaceutical candidates? Cross-functional teams that share perspectives and build on one another’s ideas freely are key to mapping highly tailored implementations aligned with strategic priorities. Boards should resist the temptation to typecast or narrowly circumscribe these technologies before a diverse set of opportunities has been explored. While certain applications may require heavier governance, boards play an essential role in encouraging out-of-the-box thinking about generative AI capabilities.


Bracing for Market Disruption


As generative AI grows more advanced and accessible, virtually every sector will face new competitive threats. What R&D efforts could be accelerated through synthetic data or simulation? How might startups deploy these tools to upend prevailing business models? Board members should scrutinize common assumptions about market dynamics, resources, and competencies that could shift fundamentally as generative AI spreads across industries. The businesses best positioned for disruption will obsessively monitor advances in adjacent sectors while challenging internal teams to brainstorm disruptive use cases of their own, capturing value and competitive differentiation before rivals replicate them. At the same time, while pushing leadership to consider game-changing scenarios, boards need reassurance that continuity planning for less dramatic outcomes is not being neglected. Preserving the versatility to pivot across a spectrum of market futures gives companies the plasticity needed to adapt.


Building Ethical and Responsive Guardrails


The rapid pace of advancement in generative AI warrants governance that ensures systems are developed and used responsibly. Boards need to probe leaders on key aspects of risk and trust. How are personnel trained to recognize bias or misinformation in generative outputs? What validation processes screen content across security, privacy, legal, and ethical dimensions before dissemination or use in sensitive applications? Who sets parameters aligned to data sensitivity tiers? What monitoring regimes observe system behavior across training cycles to identify drift? Is there adequate version control and documentation as foundation models evolve rapidly? And if guardrails fail to prevent misconduct or harm, how are investigations, remediation, and policy improvements handled? Earning public and regulatory confidence in generative AI demands not just technical skill but institutional processes and leadership commitment to accountability. Boards play a pivotal signaling role in emphasizing principled growth rooted in ethics and organizational values.
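To make the drift-monitoring question concrete: one lightweight statistic teams often report to oversight committees is the population stability index (PSI), which compares the distribution of some numeric signal (output length, toxicity score, confidence) between a baseline window and the current window. The sketch below is an illustrative Python example, not drawn from the source; the function name and thresholds are assumptions, though the 0.1/0.25 rule of thumb is a common industry convention.

```python
import math

def population_stability_index(baseline, current, bins=10):
    """PSI between two samples of a numeric monitoring metric.

    Common rule of thumb (an assumption, tune per use case):
    < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 investigate.
    """
    lo = min(min(baseline), min(current))
    hi = max(max(baseline), max(current))
    width = (hi - lo) / bins or 1.0  # avoid zero-width bins

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / width), bins - 1)
            counts[idx] += 1
        # smooth empty bins so the logarithm stays defined
        eps = 1e-4
        total = len(sample) + bins * eps
        return [(c + eps) / total for c in counts]

    p, q = proportions(baseline), proportions(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Identical distributions score zero; a shifted one scores high.
baseline = [i / 100 for i in range(100)]
shifted = [i / 100 + 0.5 for i in range(100)]
print(population_stability_index(baseline, baseline))  # 0.0
print(population_stability_index(baseline, shifted) > 0.25)  # True
```

A board does not need the formula, but asking whether such a metric exists, what its alert threshold is, and who reviews the alerts turns an abstract governance question into a verifiable control.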


Building Organizational Readiness


Realizing the potential of generative AI ultimately hinges on assembling robust technical building blocks and talent. Boards need to probe the adequacy of data infrastructure, security precautions, and technical staffing. Are foundational elements such as data cataloging, version control, and active storage management mature enough to handle volatility and scale? Have investments been made in confidential computing so that highly sensitive data stays shielded even during intensive computational workloads? Are recruiting and reskilling efforts underway to onboard the ML engineers, data specialists, and software developers that distinctive automation requirements demand? Finally, is executive leadership itself sufficiently informed about responsible, value-generating AI implementation to provide credible oversight across the many functional domains AI will ultimately touch? Building the readiness to assimilate generative AI both thoughtfully and aggressively gives companies pole position.


Unlocking the business potential of generative AI while constructing ethical guardrails demands engaged, informed leadership. Boards that proactively question and guide executives on this emerging technology stand the best chance of capturing its benefits while avoiding its pitfalls. In particular, boards should pressure-test AI ambitions against organizational readiness, risk mitigation maturity, market disruption planning, and creative application brainstorming. The balance between innovation and governance that organizations strike today, in policies, talent programs, and infrastructure, will drive their generative AI capabilities and competitiveness for years to come as advances continue to accumulate. Given the high stakes around this increasingly pervasive technology, boards need to hold leaders accountable for demonstrating the ability to assimilate generative AI both rapidly and responsibly.
