Since ChatGPT’s release, technology leaders have gone through a generation’s worth of denial, depression and acceptance as they wrestle with how to incorporate AI at their companies. AI services – with ChatGPT as their catalyst – are upending our understanding of how we engineer, how we ideate, how we manage and design products, and even how we communicate with our teams. It’s safe to say that every process we oversee needs to be reconsidered in light of AI. As technology leaders, both our teams and management are looking to us for answers as to where AI will actually be transformative over the long term. Recognizing that I would never be able to single-handedly evaluate every new AI service, I decided to get my engineering team involved in testing use cases, separating AI fact from fiction. The outcome was a team that was more excited about AI, a management team more informed about the potential of AI, and some directional ideas on how AI can assist in development. Here’s a breakdown of what I did.
In my case, I dove in and took a day to use GPT-4 to generate front-end code that loosely replicated our core product. I wanted to prove or disprove the idea that GPT can replace engineers (spoiler alert: it can’t). In those few hours of experimenting, I learned a ton about GPT’s potential and shortcomings, which I then packaged up and presented to the team.
Recognizing that everyone on my team had different ideas on what AI was capable of, I empowered them to get their hands dirty by actually trying out AI services. I asked everyone on the team to find an AI service and evaluate it. The team was not limited to using engineering and code assist services—in fact, they could use any service that they thought could help in their job. To eliminate any barrier to trying a product, I made all costs related to this AI challenge expensable. (This was important as many existing services don’t offer enterprise accounts yet, so I didn’t want team members forgoing a potential tool because of the hassle of having to pay for it out of pocket). In the end, each team member was tasked with sharing a short, 3-5 minute presentation on their findings. Need help setting up the challenge? DM me or email me at jesse@solidsender.com for some ideas.
For presentations, we used the STAR method (Situation, Task, Action, Result) to distill our learnings:

- Situation: Write a brief summary of what you set out to do and whether it was successful. Keep the summary high level and clear enough that anyone can understand it.
- Task and Action: Show the “meat” of what you did (screenshots, screen recordings, prompts).
- Result: Would you try this again, and/or recommend it to others?

In our team debrief, results were mixed! Many had found lots of potential in the tools they tried out, but they also discovered dangerous failings. Most importantly, the mystery of AI had been lessened, and team members were more interested and more open to talking about it. We’d broken the ice. To keep learning after the challenge, we set up bi-weekly sessions for the team to report on new findings. We also created a new Slack channel to stay up to date on real-time thoughts and experiments with AI. My goal was to demystify AI and arm all of us with the knowledge to use it safely and effectively in practice. We succeeded.
Given that anything copied into a tool like ChatGPT could be reused as training data, it’s important to review what guidelines you need around security and privacy before deploying company-wide. These will differ at every company and should be approved by management. Simple guardrails might look like the following:

- Employees can copy/paste source code as long as it doesn’t include proprietary information. Be smart and use common sense: if you’re not sure, ask your manager.
- Uploaded code should not exceed X length.
- Don’t upload any keys!
- Don’t upload any data (i.e., something from a database)!

Keep in mind that simply disallowing tools like ChatGPT or GitHub Copilot could provoke employees to go around your back and use them without any guardrails, so I’d encourage all managers to openly explore AI while establishing safety guidelines.
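Guardrails like these can also be partially automated. Below is a minimal sketch of a pre-flight check a team might run on a snippet before pasting it into an external AI tool. The pattern list, the `MAX_LINES` threshold, and the function name are all illustrative assumptions, not a vetted secret scanner; tune or replace them to fit your company’s policy.

```python
import re

# Illustrative (not exhaustive) patterns for things that should never be
# pasted into an external AI tool. Extend these for your own codebase.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key IDs
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private keys
    re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*['\"][^'\"]+['\"]"),
]

MAX_LINES = 200  # stand-in for the "should not exceed X length" rule above


def safe_to_share(snippet: str) -> tuple[bool, list[str]]:
    """Return (ok, reasons): ok is False if the snippet trips any guardrail."""
    reasons = []
    if len(snippet.splitlines()) > MAX_LINES:
        reasons.append(f"snippet exceeds {MAX_LINES} lines")
    for pattern in SECRET_PATTERNS:
        if pattern.search(snippet):
            reasons.append(f"possible secret matches {pattern.pattern!r}")
    return (not reasons, reasons)
```

A check like this catches the obvious mistakes (a hard-coded key, a dump that’s too long), but it can’t recognize proprietary logic, so it supplements rather than replaces the “ask your manager” rule.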
With so much AI news everywhere, it’s easy to miss important updates. Here are the key newsletters and podcasts I subscribe to:
- Axios AI+ - A great newsletter by the Axios team that covers a range of topics on AI.
- Hard Fork - The NYT podcast with Casey Newton and Kevin Roose isn’t dedicated to AI, but often spends a lot of time on it.
- AlphaSignal - A weekly AI newsletter, with a curated list of top stories.
Like every technology paradigm shift before it, AI can leave technology leaders feeling out of touch and vulnerable. The key to getting back into the driver's seat is to actually start using these new tools so that you and your team can natively understand their potential. Until you do that, you won’t know how impactful - or full of crap - AI can be.