Threats Associated with Artificial Intelligence Technologies

Applied artificial intelligence is becoming one of the greatest technological advances of our time. With its great potential, however, there come significant concerns about potential misuse, as well as unintended consequences in its deployment. Therefore, it is necessary to focus on the risks that arise from the use of artificial intelligence.

What is artificial intelligence?

AI (Artificial Intelligence) is a system that enables performing tasks or making decisions that typically require human intelligence. AI as a field of science involves the creation of algorithms or models that allow computer devices to interpret data and learn from them, recognize patterns, adapt to new information or situations, and perform other activities that would be difficult to describe with an exact algorithm.

AI has a wide range of applications such as: automated processing of multimedia content, human-machine interface (e.g. voice control), robotics, cyber security, and many others.

ML (Machine Learning) is a subcategory of AI. ML algorithms and techniques allow computers to learn from data and make decisions without being specifically programmed for such purpose. The field uses mathematical models created on the basis of training data.

The well-known ChatGPT or Bard products are examples of artificial intelligence that use machine learning to understand and generate answers to human queries.

Generative AI is a subtype of artificial intelligence that includes the creation of new textual, visual, audio or even audio-visual content, for example:

  • Visual generative AI
  • Image creation: can create photorealistic images, such as faces, backgrounds or even completely unique objects that do not exist in the real world
  • Making videos
  • Changing the style: editing existing images or videos
  • Text
  • Text generation: language generative models can be trained to generate coherent and contextually relevant text for use in chatbots, content creation or text translation
  • Generating source code, data queries, and text representations of data in various formats,…
  • Audio and audio-visual generation
  • Multi-modal AI: processing and generation of multiple media types by one model (e.g. automatic video subtitling)

Threats

Fake AI products

One of the threats stems from the popularity of AI. The threat is that scammers misuse the brand of a well-known company and its AI product. They create an advertisement or website referring to this brand. Such fraudulent sites will trick you into downloading various files thinking you are downloading a legitimate AI product, but in reality you are downloading malware, putting your data at risk. These fraudulent campaigns often appear on social networking sites like Facebook – people like to join discussions or share the ad itself, and such increasing the reach of scammers.

 

For AI products, we recommend increased attention, checking the history of the page on social networking sites, or reading reviews of the product from other users.

Leakage of information passed to the model

Due to the technological complexity, large language models are typically provided as an online service. The terms of use usually state that data entered by users can be used for further training of the model. Therefore, as with any other third-party online service, entering sensitive information should be avoided. In the past, security researchers were able to obtain such information by making the right queries.

AI-assisted malware creation

Another potential threat is the use of AI to create malicious code, even though today’s language models are not capable of generating complex malicious code. However, in the hands of a skilled attacker, a language model could help to create malicious code more effectively. Language models such as ChatGPT have security measures implemented, which should prevent users from creating malicious code. More experienced users are able to use sophisticated techniques to bypass these security measures and create malware. However, it should be noted that the quality of fraudulent content has not increased since the appearance of language models. This is probably due to the fact that current models require a very precise specification of the task and are unable to generate more than a few paragraphs of source code at a time.

AI-assisted phishing creation

Phishing campaigns may become more effective and authentic in the future thanks to AI, increasing the chances of attackers of getting their victims to perform a certain activity. With the help of AI, attackers are able to create a highly personalized and convincing phishing campaign without grammatical and stylistic errors.

Identity attacks and deepfake

Generative models can create fake texts or quotes in the style of a particular person. Commonly available photo editing software can generate realistic “photos” of situations that never happened. Deepfakes also pose a threat; the scam involves creating highly realistic audio or audio-visual products that try to imitate someone’s face or voice. There are known cases where attackers used deepfake content to blackmail the victim and obtain an amount of money afterwards.

This attack is also possible in real time. The National Cyber Security Centre SK-CERT is aware of deepfake calls being actively carried out through online platforms targeting also companies operating in Slovakia. So far, however, such conversations are recorded only in English, in which the deepfake technology is more advanced.

AI security risks in physical systems

Physical security can also be threatened by artificial intelligence. More and more systems such as autonomous vehicles, production and construction machines or medical devices are using AI. If, for example, there was a security breach in an autonomous car, there could be a significant threat to life or property. It can also be problematic to use AI in cases where life may be directly at risk – for example, an ML model for controlling a pacemaker. As it is difficult to fully understand them, it is also difficult to determine how they would behave in different situations.

Hostile attacks, attacking stickers, manipulation during use

Misuse of graphic models by “attacking stickers”. By a relatively small modification of the image, where a person still recognises the original object, it is possible to convince the software that it is looking at something completely different. Moreover, a person cannot recognize by sight what else the software sees there. It turned out that machine learning not only processes an image differently than the human brain, but this attack is successful regardless of the particular architecture of the model.

“Attacking stickers” that are commonly available on the market can confuse recognition of street signs by today’s cars. So, it is possible that a few small, innocent-looking square stickers on a STOP sign will cause the car to recognize them as the speed limit [1].

This type of attack belongs to a broader family involving any special modification of the input data to confuse the model.

Unexpected, incorrect and harmful results

Despite the name “artificial intelligence,” language models do not actually think. The output is calculated from a large corpus of data from which a probability mathematical model was created. Therefore, any output of such AI can be true, and also a “hallucination of the model“. This is not a flaw in the model but a design feature of the model.

Even if the model has generated data that fully matches the data on which it was trained, that data may also be incorrect or skewed (e.g. pseudoscience, erroneous or even harmful health recommendations, prejudices, or support for destructive behaviour).

Thus, using model results without being able to independently verify that they are correct, can be risky.

Harmful results induced intentionally

Prompt injection, data manipulation, and bypassing built-in limits can produce unexpected or even harmful results. AI models have built-in protection that prevents them from being misused to generate malicious content. However, this protection can often be bypassed with a properly formulated request.

Misuse of AI to violate privacy protection

Privacy protection risks, such as a mass surveillance of people. This topic is currently being discussed at the EU level in the proposed AI Act, which defines 4 levels of risk. The “unacceptable risk” group includes, for example, real-time biometrics, social scoring, or manipulation of vulnerable groups (voice-activated toys that encourage dangerous behaviour).

Recommendations

The National Cyber Security Centre SK-CERT emphasizes the importance of critical thinking when working with artificial intelligence and machine learning.

  • Check if the offer of a product with artificial intelligence is not a scam (non-existing product).
  • Be cautious about the manufacturer’s claims about AI model capabilities. As a rule, marketing in this area greatly overestimates the real capabilities of the technology. For example, claims such as “will catch 100% of attacks” or “completely without human intervention” are usually not based on truth.
  • Do not enter any personal, health, financial, company or other sensitive data into online services using ML.
  • Be vigilant about the digital content you receive and keep in mind that it could potentially be generated by artificial intelligence (phishing sites, deepfake in pre-recorded videos, deepfake in live phone calls).
  • Especially for voice calls via online voice services or traditional phone calls, we recommend the following: if you feel that the person you are communicating with may not be real, either ask them for details that the attacker should not know or ask them to continue the conversation through another communication channel or live.
  • When deploying and using such technologies, pay particular attention to the possibilities of how they could endanger human life and health and take appropriate measures. Remember that the model may fail or be subject to a deliberate attack with spoofed data, which may also be in physical form (e.g. attacking stickers).
  • Check the outputs of the language model independently (other than using another language model). If application of the results could cause any harm, do not use them without checking.
  • If you are deploying your own product based on machine learning and artificial intelligence, remember that attackers may try to exploit it. Identify all the risks that may arise from such activity (e.g. a reputational risk if the deployed chatbot provides false or harmful information, a risk of physical harm to the customer if the robotic system fails, and so on).
  • When deploying and using AI products, keep in mind compliance with legal standards and any other regulations.
  • Search for trusted sources of information about AI safety and regularly monitor the developments in this field.

With the increasing deployment of artificial intelligence technologies in new situations, new threats will undoubtedly be identified, and therefore the above-mentioned recommendations should not be considered to be sufficient.

[1] https://spectrum.ieee.org/slight-street-sign-modifications-can-fool-machine-learning-algorithms

 


« Späť na zoznam