
Security implications of hastily implemented AI and understanding what to do

Nov. 10, 2023
The tech world, industrial IT, and OT are abuzz over the possibilities of generative AI. The technology is still nascent and only beginning to come into use, so its applications in digital transformation must be approached with caution.

Generative AI is a hot topic right now, so it’s no surprise that some cyber solutions coming to market have been thrown together hastily. What should be done to build today so that your solutions stand the test of time as large language models (LLMs) for AI, such as ChatGPT, Bard, and more, mature rapidly?

OpenAI unveiled ChatGPT on Nov. 30, 2022, creating a buzz in the tech world. This chatbot, powered by the GPT-3 (short for Generative Pre-trained Transformer) model, quickly made history by becoming the fastest-growing consumer application ever, amassing 100 million active users in just two months. To put this into perspective, Facebook (now Meta) took four and a half years, Instagram two and a half years, and TikTok nine months to reach a comparable following.

See also: Gen-AI leads back to reducing downtime on the line

What's intriguing is that the technology underpinning ChatGPT isn’t entirely novel. It all stems from the transformer model architecture, which found its roots in a 2017 Google research paper titled “Attention Is All You Need.” As of late May, this paper has been cited more than 75,000 times and is often considered the wellspring of generative AI innovation.

The first GPT model from OpenAI and Alphabet's BERT language model hit the scene in 2018. Amazon, Meta, IBM, Alibaba, and Tencent later introduced their own large language models.

These deep learning algorithms excel at analyzing vast text data, enabling them to generate coherent responses that are contextually relevant. The keyword here is “coherent.” AI has a knack for approximating the most likely outcomes based on the vast training data it has ingested.

Defining what AI is and what’s not

It’s essential to distinguish between AI and machine learning (ML). Machine learning focuses on teaching a neural network to approximate the next outcome from historical data, whereas AI has the capacity to consider a much broader set of correlations; even so, the current use cases for AI remain somewhat limited. Think of “deep” in the context of a surveyor: it is deep, specialized knowledge about the type of land a building is to be constructed on. An architect, in contrast, must take that knowledge and correlate it with engineering, design, aesthetics, and sustainability, making it a broader correlation.

See also: Cybersecurity stakeholders praise AI executive order—but say it’s just a start

AI needs to be broad, as it can consider a more extensive range of parameters. However, AI-generated outcomes should not be taken as gospel. Currently, AI’s most common applications involve creative writing, generating prompts for creative content, aiding in digital assistance, enhancing translation and communication, and even generating humorous responses. There are niche applications as well, such as GitHub Copilot, which transforms natural language prompts into coding suggestions. Although termed an AI application, it leans more toward machine learning.

In a relatively short time, these large language models have piqued the interest of nearly every industry. While there might be concrete use cases that emerge after a period of intense experimentation, there have been early instances of errors, inaccuracies, and inherent biases in these algorithms that have sowed doubt.

IoT in the AI context

The initial debate around accountability in AI has given rise to discussions about safe and secure AI. In July, the UK’s chief security expert cautioned that as companies race to develop new AI products, security may be getting overlooked.

AI thrives on data, leading to new demands for data collection, curation, and use. While some AI systems continue to rely on access to extensive data sets, sheer volume is no longer the only priority: some next-generation AI technologies are designed to work with data of verified authenticity, secured end to end. This is where the hyperconnected world of IoT devices, and the data it produces, may become a crucial tool for training and improving AI recommendations and actions.

See also: AI coupled with 5G improves industrial network operations

Today, IoT applications of ChatGPT are in their infancy, prompting questions like, “Would you trust ChatGPT to control your smart home lights?” So, how can ChatGPT enhance IoT security? Here are some early use cases:

  • NLP for authentication: ChatGPT can assist in implementing secure authentication mechanisms by analyzing and interpreting natural language inputs. It can verify user identity based on specific prompts or responses, helping prevent unauthorized access to IoT devices or systems.
  • Anomaly detection: AI can be trained to recognize patterns and behaviors associated with normal IoT device operations. By monitoring device-generated data and analyzing it using the language model, it can identify unusual or suspicious activities, indicating a potential security breach. Early detection and timely response to security threats may become possible.
  • Threat intelligence and alerting: ChatGPT can be integrated with security systems to provide real-time threat intelligence. It can identify and interpret potential security threats by analyzing security logs, sensor data, or network traffic. AI can take it further by predicting and mitigating scenarios and generating alerts, notifications, or recommendations to enhance IoT security.
  • Security policy enforcement: ChatGPT can assist in enforcing security policies within an IoT environment. It can interpret policy rules and guidelines, validate user inputs or commands against those policies, and provide real-time feedback or warnings if any security violations are detected. This ensures that IoT devices and systems adhere to defined security standards.
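The anomaly-detection idea above can be sketched with a simple statistical baseline. This is a hypothetical illustration, not a production detector: a model of a device's normal operating range is learned from historical telemetry, and readings that deviate sharply are flagged. All function and variable names here are illustrative assumptions.

```python
from statistics import mean, stdev

def fit_baseline(readings):
    """Learn a device's normal operating range from historical telemetry."""
    return mean(readings), stdev(readings)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag a reading whose z-score exceeds the threshold."""
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Example: temperature telemetry from a hypothetical IoT sensor
normal = [21.0, 21.4, 20.8, 21.1, 21.3, 20.9, 21.2, 21.0]
baseline = fit_baseline(normal)

print(is_anomalous(21.1, baseline))  # a typical reading: False
print(is_anomalous(45.0, baseline))  # a sudden spike: True
```

In practice, a language model would sit a layer above a detector like this, interpreting flagged events in context and generating the human-readable alerts and recommendations described above.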

Onward from here

It’s important to note that these are still early-stage endeavors; as AI language models evolve rapidly, their capabilities have yet to be thoroughly tested at scale. There are numerous unresolved issues at the core of generative AI, so it's crucial to maintain a sense of realism and not expect these solutions to be flawless or to suddenly solve every problem.

Generative AI in secure IoT is a double-edged sword, presenting both opportunities and risks. While it’s not a silver bullet, it’s a significant tool in our arsenal.

About the Author

Bijal "Bee" Hayes-Thakore

Bijal "Bee" Hayes-Thakore is VP of marketing at Kigen. An engineer turned marketer, she cut her teeth in IoT through the first network designed for the Moon and by writing algorithms for autonomous robotic fleets. For the last decade, she has focused on how businesses and communities can succeed in their transformation through the adoption of IoT, connectivity, and secure data.