Trewstar Corporate Board Services

Scary vs Useful

January 19, 2024


Dear Friends of Trewstar:


 
Wired Magazine published three articles last week:

Get Ready for the Great AI Disappointment

CES 2024 Preview: Get Ready for a ‘Tsunami’ of AI

AI Needs to Be Both Trusted and Trustworthy

 

My conclusion? No one really knows what's going to happen as AI capabilities advance. But since something is definitely going to happen, I want to stay current. Here are the three things I'm doing right now:

  • Reading (actually listening to) Dr. Fei-Fei Li’s memoir The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI. Born in China, she arrived in New Jersey at 15 without enough English to do word problems in math class. Dr. Li’s fascinating story is deepening my understanding of AI amid all the noise.

  • Taking a course offered by Coursera called Prompt Engineering for ChatGPT. In fact, I’ve asked the entire Trewstar team to complete it. (We recommend listening to the lectures at 1.5x speed!)  

  • Learning from my good friend, board member, and investor in AI companies, Heather Redman. I asked Heather to be a virtual guest speaker today. Here is our Q&A.

 

Beth: AI is regularly described as both very scary and very useful. Which way do you see the scales tipping? 

 
Heather: I see the scales tipping towards useful. The technology is moving rapidly into the hands of engineers and product developers, which means it is getting steadily more helpful to corporations. We see empirical evidence of this in studies (BCG has done several) showing efficiency gains in coding and other knowledge-worker tasks. We are also seeing the early cost-saving benefits of AI in the form of layoffs, for example, in Google’s ad sales division and Duolingo’s termination of contractors who generate content.
 
Even as predictions are pulled forward for when the first artificial general intelligence (advanced AI capable of performing intellectual functions as well as or better than humans) will be achieved, both AI experts and the general public have become less alarmist about its hypothetical existential threat to humanity. Some of this is probably due to habituation to the concept of AI, as well as fledgling regulatory efforts to limit bias, increase transparency, and protect national competitiveness and security (including cybersecurity).
 
Beth: What are examples of useful corporate applications of AI that are being implemented well and improving existing processes?
 
Heather: The canonical example is computer coding, where productivity has been increased by some 40% in a variety of enterprises. Corporate departments such as customer support, marketing (particularly the production of content), finance, and research are all starting to benefit from early-stage AI augmentation. Full automation will follow in some cases, but dramatically increased productivity is the real story. A kind of AI called reinforcement learning (reward-driven trial-and-error learning in a dynamic environment) is being used to autonomously operate energy-intensive facilities such as data centers to lower energy usage. Another form of AI, namely computer vision, is being used to replace manual counting of certain types of inventories, supplemented by human supervision for now.
 
Beth: What are examples of corporate applications that are not working well? Why not?
 
Heather: The failures in AI occur primarily for two reasons: data and culture.
 
Job one for companies thinking about AI enablement is collecting all relevant data onto a unified platform, sometimes referred to as a data lake. For industrial companies, this means building on top of smart manufacturing capabilities like continuous monitoring via sensors. The addition of AI enables manufacturers to use historical data to train models and then ongoing data to continually retrain them. Historical and ongoing data can also be used to change how something is done. A good example is how Siemens uses AI both to catch more product defects and to continually improve how it screens for defects.
 
You can also fail on culture. If your workforce sees AI as a threat, your implementation is likely doomed. A friend calls this the quinoa-in-the-robot problem, named after a salad bowl startup that failed because human workers sabotaged their robot colleagues’ ability to make salads by – you guessed it – stuffing them with quinoa. Educating and incentivizing workers to embrace AI’s ability to lighten their workload is paramount. But this is far from easy, because the threat to human jobs is real. Managing these issues has further elevated the role of HR.
 
Real gains are already being made through the application of AI, but the race to fully capitalize on its promise is still in the early innings. Sharing the right data in the right ways is already hard. Taking the actions your AI recommends is even harder because AI, properly implemented, should not be constrained by organizational boundaries, and humans often are. Companies able to act on the right AI-generated insights across their divisional silos are likely to be the big winners.
 
Beth: Thank you, Heather! I can see why your boards value your expertise as they build their AI roadmaps. Something tells me that when they write case studies about the first truly AI-enabled organizations, your name will be in them!



As always, we look forward to your thoughts on this topic, including AI stories from your companies and boards.

Best Regards,