Stephen Thomas on the Balance between AI Advancement and Data Security

In a recent conversation with CanadianSME Small Business Magazine, Stephen Thomas, Executive Director (Analytics & AI Ecosystems) and Distinguished Professor of Management Analytics at Queen’s University’s Smith School of Business, delved into the complexities of integrating generative AI tools into businesses. He addressed the data security concerns that are leading some companies to ban tools like ChatGPT from their workplaces, and offered guidance on striking a balance between embracing generative AI and exercising caution. He identified three key elements crucial for the successful adoption and implementation of these tools and explained why each matters. He also proposed ways to address data security and privacy concerns, emphasizing the need for appropriate safeguards, and closed with recommendations and key principles for leaders considering the integration of generative AI tools into their organizations.

Dr. Stephen W. Thomas is an adjunct faculty member at Smith School of Business at Queen’s University in Kingston, ON, Canada. He is also the Executive Director of Smith’s Analytics & AI Ecosystem. He was named the Professor of the Year in the MMA program in 2017 and 2018.

Dr. Thomas holds PhD, MSc, and BSc degrees in Computer Science. His main interests are databases, data analytics, and natural language processing. His research has been published in IEEE Transactions on Knowledge and Data Engineering, IEEE Transactions on Software Engineering, Empirical Software Engineering, and others. He is a recipient of the Scotiabank Scholar research grant.


Many companies have decided to ban generative AI tools from their workplaces due to concerns over data security. Why do you think some companies are taking this approach? What are the main concerns they have regarding generative AI tools like ChatGPT?

The primary reason companies have banned generative AI tools like ChatGPT is concern over information safety and security. Companies worry that employees will include confidential or sensitive data or code in their prompts. Because those prompts are sent to OpenAI’s cloud servers, where they may be stored indefinitely, and because OpenAI can use them largely as it pleases, companies risk losing trade secrets and breaching privacy regulations. For example, OpenAI might use past prompts to train the next version of ChatGPT, at which point anyone could extract a company’s sensitive data by prompting ChatGPT the right way. OpenAI could also sell the prompts, sensitive data and all, to the highest bidder.

Another reason is the well-known hallucination issue: the text ChatGPT generates is sometimes factually incorrect yet stated in a confident, persuasive tone. Companies that use ChatGPT in any customer-facing application risk embarrassment, liability, and worse. Years of research are still needed before ChatGPT can be trusted in mission-critical systems that must be “correct.”

Finally, many companies have banned ChatGPT due to OpenAI’s lack of transparency. OpenAI has not shared any details on how ChatGPT was trained, what data sources were used, what the internal architecture is, what manual guardrails have been installed, and what testing has been performed. Without these details, many organizations struggle to justify its use in business-critical scenarios.


Balancing the embrace of generative AI tools with the exercise of caution is crucial. Could you explain how business leaders can strike this balance effectively? What are the key considerations they should keep in mind?

First, businesses must strongly commit to testing AI tools before deploying them. Second, businesses must be realistic about the downsides of these tools, exciting and novel as they are, and avoid deploying them in inappropriate scenarios. Third, businesses must invest in educating their leaders on how the technology works, what the best use cases are (and aren’t), and how to use these tools most effectively.


In your opinion, what are the three key elements that business leaders should focus on to successfully adopt and implement generative AI tools like ChatGPT in their organizations? Could you elaborate on each of these elements and why they are important?

In short: education, guidelines, and communication.

  • Education. Organizations should invest heavily in training to build foundational understanding and capacity. Options range from short online courses and week-long workshops all the way up to, in some cases, full master’s degrees. Training should occur at all levels of the organization: front-line workers, middle managers, and senior leadership. The better the technology is understood, the lower the chance it will be used incorrectly.
  • Guidelines. Organizations should develop clear internal best-practice guidelines so that individual employees and departments are not left to figure these tools out in isolation. The guidelines should be built in an agile fashion, with low barriers to change as the technology progresses and use cases evolve, and must include a focus on ethics and responsible use.
  • Communication. Organizations should communicate expectations around the use of these tools. For example, they should list the applications and scenarios in which the tools are and are not allowed, as well as which data sources may and may not be used to fine-tune the AI tools.

Let’s talk about the concerns surrounding data security and privacy with generative AI tools. How can organizations address these concerns and ensure the appropriate safeguards are in place when using such tools?

Currently, there are only two ways to address data security and privacy concerns: either don’t use generative AI tools at all, or ensure that no sensitive data is ever included in the prompts sent to them.
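For organizations taking the second path, a lightweight automated check can back up the policy. The sketch below is a minimal illustration in Python of a regex-based filter that redacts obviously sensitive patterns before a prompt leaves the organization; the pattern list and the redact() helper are hypothetical examples, not drawn from any vendor’s tooling.

```python
import re

# Illustrative patterns only; a real deployment would need a far broader
# policy covering names, addresses, source code, project identifiers, etc.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "sin": re.compile(r"\b\d{3}[ -]\d{3}[ -]\d{3}\b"),  # Canadian SIN-like
}

def redact(prompt: str) -> str:
    """Replace any text matching a sensitive pattern with a placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize: client jane.doe@acme.com paid with 4111 1111 1111 1111."
    print(redact(raw))
    # Summarize: client [REDACTED EMAIL] paid with [REDACTED CREDIT_CARD].
```

In practice such a filter would sit in front of whatever client sends prompts to the external AI service, and it would be only one layer of a broader data-handling policy, not a substitute for the education and guidelines discussed above.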


Finally, do you have any advice or recommendations for business leaders who are considering the adoption of generative AI tools like ChatGPT in their organizations? What key principles should guide their decision-making process and implementation strategy?

Generative AI is in its infancy and changing rapidly. While the desire to unlock value from these tools is enormous, the unknowns and risks are even greater. Using generative AI for mission-critical systems is, at this point, still premature for most organizations. The use cases that currently offer organizations the most benefit with the least risk involve creativity, brainstorming, and exploring “what-if” scenarios, so long as employees are careful about what information they give these tools.
